2016-08-10 15:44:56,663 INFO [main] hbase.HBaseTestingUtility(496): Created new mini-cluster data directory: /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd, deleteOnExit=true
2016-08-10 15:44:58,826 INFO [main] zookeeper.MiniZooKeeperCluster(276): Started MiniZooKeeperCluster and ran successful 'stat' on client port=50432
2016-08-10 15:44:58,849 INFO [main] hbase.HBaseTestingUtility(1013): Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2016-08-10 15:44:58,849 INFO [main] hbase.HBaseTestingUtility(743): Setting test.cache.data to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/cache_data in system properties and HBase conf
2016-08-10 15:44:58,850 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.tmp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_tmp in system properties and HBase conf
2016-08-10 15:44:58,850 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.log.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_logs in system properties and HBase conf
2016-08-10 15:44:58,850 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.local.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/mapred_local in system properties and HBase conf
2016-08-10 15:44:58,851 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.temp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/mapred_temp in system properties and HBase conf
2016-08-10 15:44:58,851 INFO [main] hbase.HBaseTestingUtility(734): read short circuit is OFF
2016-08-10 15:44:58,966 WARN [main] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-08-10 15:44:59,132 DEBUG [main] fs.HFileSystem(221): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2016-08-10 15:44:59,702 WARN [main] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2016-08-10 15:44:59,797 INFO [main] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-08-10 15:44:59,852 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-10 15:44:59,879 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/hdfs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_56217_hdfs____.6brtmn/webapp
2016-08-10 15:44:59,993 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:56217
2016-08-10 15:45:00,417 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-10 15:45:00,420 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_56220_datanode____i04cgm/webapp
2016-08-10 15:45:00,491 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:56220
2016-08-10 15:45:01,075 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a node DatanodeRegistration(127.0.0.1:56219, datanodeUuid=df30d679-96d3-4692-b684-a43b060adbff, infoPort=56221, infoSecurePort=0, ipcPort=56222, storageInfo=lv=-56;cid=testClusterID;nsid=770281502;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-08-10 15:45:01,076 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-1306e099-b7c4-4e61-aefd-402a3d189b66 node DatanodeRegistration(127.0.0.1:56219, datanodeUuid=df30d679-96d3-4692-b684-a43b060adbff, infoPort=56221, infoSecurePort=0, ipcPort=56222, storageInfo=lv=-56;cid=testClusterID;nsid=770281502;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-08-10 15:45:01,196 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:01,199 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:01,494 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 7
2016-08-10 15:45:01,907 INFO [main] util.FSUtils(749): Created version file at hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9 with version=8
2016-08-10 15:45:02,536 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-10 15:45:03,212 INFO [main] client.ConnectionUtils(106): master//10.22.16.34:0 server-side HConnection retries=350
2016-08-10 15:45:03,299 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-10 15:45:03,355 INFO [main] ipc.RpcServer$Listener(635): master//10.22.16.34:0: started 3 reader(s) listening on port=56226
2016-08-10 15:45:03,539 INFO [main] hfile.CacheConfig(548): Allocating LruBlockCache size=995.60 MB, blockSize=64 KB
2016-08-10 15:45:03,567 DEBUG [main] hfile.CacheConfig(562): Trying to use Internal l2 cache
2016-08-10 15:45:03,567 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:03,568 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:03,589 INFO [main] mob.MobFileCache(121): MobFileCache is initialized, and the cache size is 1000
2016-08-10 15:45:03,592 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:03,766 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=master:56226 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:03,796 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:562260x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:03,799 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): master:56226-0x15676a151160000 connected
2016-08-10 15:45:03,898 DEBUG [main] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-08-10 15:45:03,899 DEBUG [main] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-10 15:45:03,902 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-10 15:45:03,902 INFO [RpcServer.listener,port=56226] ipc.RpcServer$Listener(769): RpcServer.listener,port=56226: starting
2016-08-10 15:45:03,903 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-10 15:45:03,904 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-10 15:45:03,904 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-10 15:45:03,904 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-10 15:45:03,904 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-10 15:45:03,904 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-10 15:45:03,905 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-10 15:45:03,905 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-10 15:45:03,905 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
2016-08-10 15:45:03,905 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-10 15:45:03,905 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-10 15:45:03,906 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-10 15:45:03,906 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-10 15:45:03,962 INFO [main] master.HMaster(397): hbase.rootdir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9, hbase.cluster.distributed=false
2016-08-10 15:45:04,046 DEBUG [main] impl.BackupManager(134): Added log cleaner: org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-10 15:45:04,046 DEBUG [main] impl.BackupManager(135): Added master procedure manager: org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
2016-08-10 15:45:04,047 DEBUG [main] impl.BackupManager(136): Added master observer: org.apache.hadoop.hbase.backup.master.BackupController
2016-08-10 15:45:04,078 INFO [main] master.HMaster(1719): Adding backup master ZNode /1/backup-masters/10.22.16.34,56226,1470869103454
2016-08-10 15:45:04,096 DEBUG [main] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/backup-masters/10.22.16.34,56226,1470869103454
2016-08-10 15:45:04,102 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/master
2016-08-10 15:45:04,103 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/master
2016-08-10 15:45:04,104 INFO [10.22.16.34:56226.activeMasterManager] master.ActiveMasterManager(170): Deleting ZNode for /1/backup-masters/10.22.16.34,56226,1470869103454 from backup master directory
2016-08-10 15:45:04,105 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/master
2016-08-10 15:45:04,105 DEBUG [main-EventThread] master.ActiveMasterManager(126): A master is now available
2016-08-10 15:45:04,106 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/backup-masters/10.22.16.34,56226,1470869103454
2016-08-10 15:45:04,123 WARN [10.22.16.34:56226.activeMasterManager] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-10 15:45:04,123 INFO [10.22.16.34:56226.activeMasterManager] master.ActiveMasterManager(179): Registered Active Master=10.22.16.34,56226,1470869103454
2016-08-10 15:45:04,156 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-10 15:45:04,158 INFO [main] client.ConnectionUtils(106): regionserver//10.22.16.34:0 server-side HConnection retries=350
2016-08-10 15:45:04,158 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-10 15:45:04,160 INFO [main] ipc.RpcServer$Listener(635): regionserver//10.22.16.34:0: started 3 reader(s) listening on port=56228
2016-08-10 15:45:04,167 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:04,168 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:04,170 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:04,172 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:56228 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:04,174 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:562280x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:04,175 DEBUG [main] zookeeper.ZKUtil(365): regionserver:562280x0, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/master
2016-08-10 15:45:04,176 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): regionserver:56228-0x15676a151160001 connected
2016-08-10 15:45:04,176 DEBUG [main] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-10 15:45:04,176 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-10 15:45:04,176 INFO [RpcServer.listener,port=56228] ipc.RpcServer$Listener(769): RpcServer.listener,port=56228: starting
2016-08-10 15:45:04,176 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-10 15:45:04,177 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-10 15:45:04,177 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-10 15:45:04,178 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-10 15:45:04,178 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-10 15:45:04,178 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-10 15:45:04,178 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-10 15:45:04,179 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-10 15:45:04,179 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
2016-08-10 15:45:04,179 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-10 15:45:04,179 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-10 15:45:04,179 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-10 15:45:04,180 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-10 15:45:04,241 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:04,244 DEBUG [10.22.16.34:56226.activeMasterManager] util.FSUtils(901): Created cluster ID file at hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/hbase.id with ID: 1e898bd8-136b-4246-af28-e1914be41b82
2016-08-10 15:45:04,375 INFO [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x289359b6 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:04,375 INFO [M:0;10.22.16.34:56226] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x61e6d089 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:04,378 DEBUG [RS:0;10.22.16.34:56228-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x289359b60x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:04,379 DEBUG [M:0;10.22.16.34:56226-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x61e6d0890x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:04,379 INFO [RS:0;10.22.16.34:56228] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2016-08-10 15:45:04,379 DEBUG [RS:0;10.22.16.34:56228] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster
2016-08-10 15:45:04,379 INFO [M:0;10.22.16.34:56226] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2016-08-10 15:45:04,379 DEBUG [M:0;10.22.16.34:56226] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster
2016-08-10 15:45:04,379 DEBUG [RS:0;10.22.16.34:56228-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x289359b6-0x15676a151160002 connected
2016-08-10 15:45:04,380 DEBUG [M:0;10.22.16.34:56226-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x61e6d089-0x15676a151160003 connected
2016-08-10 15:45:04,437 DEBUG [M:0;10.22.16.34:56226] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2fffbd21, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:45:04,437 DEBUG [RS:0;10.22.16.34:56228] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ed9da25, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:45:04,437 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:04,437 DEBUG [RS:0;10.22.16.34:56228] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:04,438 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(138): Create NioEventLoopGroup with maxThreads = 0
2016-08-10 15:45:04,440 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(113): Create global event loop group NioEventLoopGroup
2016-08-10 15:45:04,440 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:04,440 DEBUG [RS:0;10.22.16.34:56228] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:04,449 INFO [10.22.16.34:56226.activeMasterManager] master.MasterFileSystem(528): BOOTSTRAP: creating hbase:meta region
2016-08-10 15:45:04,461 INFO [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(6162): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'table', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9 Table name == hbase:meta
2016-08-10 15:45:04,554 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:04,559 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740
2016-08-10 15:45:04,748 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:04,821 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:04,832 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/info
2016-08-10 15:45:04,854 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:04,855 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:04,857 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/table
2016-08-10 15:45:04,901 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740
2016-08-10 15:45:04,947 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead
2016-08-10 15:45:04,958 DEBUG [10.22.16.34:56226.activeMasterManager] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:45:04,958 INFO [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=2
2016-08-10 15:45:04,958 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-10 15:45:04,958 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-10 15:45:04,960 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-10 15:45:04,960 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-10 15:45:04,960 INFO [10.22.16.34:56226.activeMasterManager] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-10 15:45:05,017 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:05,021 DEBUG [10.22.16.34:56226.activeMasterManager] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:05,032 INFO [10.22.16.34:56226.activeMasterManager] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:05,061 INFO [10.22.16.34:56226.activeMasterManager] coordination.ZKSplitLogManagerCoordination(599): Found 0 orphan tasks and 0 rescan nodes
2016-08-10 15:45:05,061 DEBUG [10.22.16.34:56226.activeMasterManager] util.FSTableDescriptors(222): Fetching table descriptors from the filesystem.
2016-08-10 15:45:05,233 INFO [10.22.16.34:56226.activeMasterManager] balancer.StochasticLoadBalancer(156): loading config
2016-08-10 15:45:05,281 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/balancer
2016-08-10 15:45:05,290 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/normalizer
2016-08-10 15:45:05,297 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/split
2016-08-10 15:45:05,297 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/merge
2016-08-10 15:45:05,425 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running
2016-08-10 15:45:05,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running
2016-08-10 15:45:05,426 INFO [10.22.16.34:56226.activeMasterManager] master.HMaster(620): Server active/primary master=10.22.16.34,56226,1470869103454, sessionid=0x15676a151160000, setting cluster-up flag (Was=false)
2016-08-10 15:45:05,428 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(813): ClusterId : 1e898bd8-136b-4246-af28-e1914be41b82
2016-08-10 15:45:05,428 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(813): ClusterId : 1e898bd8-136b-4246-af28-e1914be41b82
2016-08-10 15:45:05,459 INFO [RS:0;10.22.16.34:56228] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
2016-08-10 15:45:05,459 INFO [M:0;10.22.16.34:56226] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
2016-08-10 15:45:05,474 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing
2016-08-10 15:45:05,474 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing
2016-08-10 15:45:05,506 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc already exists
2016-08-10 15:45:05,507 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/acquired already exists
2016-08-10 15:45:05,508 INFO [10.22.16.34:56226.activeMasterManager] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager was loaded successfully.
2016-08-10 15:45:05,510 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/abort already exists
2016-08-10 15:45:05,512 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized
2016-08-10 15:45:05,512 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing
2016-08-10 15:45:05,512 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized
2016-08-10 15:45:05,513 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing
2016-08-10 15:45:05,514 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot already exists
2016-08-10 15:45:05,515 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot/acquired already exists
2016-08-10 15:45:05,516 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot/reached already exists
2016-08-10 15:45:05,516 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot/abort already exists
2016-08-10 15:45:05,517 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized
2016-08-10 15:45:05,517 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized
2016-08-10 15:45:05,517 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing
2016-08-10 15:45:05,517 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing
2016-08-10 15:45:05,518 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc already exists
2016-08-10 15:45:05,519 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/acquired already exists
2016-08-10 15:45:05,520 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/reached already exists
2016-08-10 15:45:05,521 DEBUG [RS:0;10.22.16.34:56228] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/abort already exists
2016-08-10 15:45:05,521 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized
2016-08-10 15:45:05,521 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized
2016-08-10 15:45:05,548 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot/acquired already exists
2016-08-10 15:45:05,550 INFO [10.22.16.34:56226.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2016-08-10 15:45:05,551 INFO [M:0;10.22.16.34:56226] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2016-08-10 15:45:05,551 INFO [RS:0;10.22.16.34:56228] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2016-08-10 15:45:05,552 DEBUG [10.22.16.34:56226.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,568 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/acquired already exists
2016-08-10 15:45:05,570 INFO [10.22.16.34:56226.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort
2016-08-10 15:45:05,572 DEBUG [10.22.16.34:56226.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,573 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/acquired already exists
2016-08-10 15:45:05,574 INFO [10.22.16.34:56226.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/flush-table-proc/acquired /1/flush-table-proc/reached /1/flush-table-proc/abort
2016-08-10 15:45:05,576 DEBUG [10.22.16.34:56226.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,602 INFO [RS:0;10.22.16.34:56228] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-08-10 15:45:05,602 INFO [M:0;10.22.16.34:56226] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-08-10 15:45:05,603 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec
2016-08-10 15:45:05,603 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec
2016-08-10 15:45:05,620 DEBUG [RS:0;10.22.16.34:56228] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d0d56ce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.16.34/10.22.16.34:0
2016-08-10 15:45:05,620 DEBUG [RS:0;10.22.16.34:56228] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:05,620 DEBUG [RS:0;10.22.16.34:56228] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:05,620 DEBUG [M:0;10.22.16.34:56226] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c42e9f3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.16.34/10.22.16.34:0
2016-08-10 15:45:05,620 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:05,620 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:05,631 DEBUG [M:0;10.22.16.34:56226] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:M:0;10.22.16.34:56226
2016-08-10 15:45:05,631 DEBUG [RS:0;10.22.16.34:56228] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;10.22.16.34:56228
2016-08-10 15:45:05,653 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-10 15:45:05,654 DEBUG [RS:0;10.22.16.34:56228] zookeeper.ZKUtil(365): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:45:05,654 DEBUG [M:0;10.22.16.34:56226] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,655 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:45:05,655 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,656 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:45:05,657 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,667 INFO [10.22.16.34:56226.activeMasterManager] master.MasterCoprocessorHost(91): System coprocessor loading is enabled
2016-08-10 15:45:05,669 INFO [RS:0;10.22.16.34:56228] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2016-08-10 15:45:05,669 INFO [M:0;10.22.16.34:56226] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2016-08-10 15:45:05,670 INFO [M:0;10.22.16.34:56226] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2016-08-10 15:45:05,670 INFO [RS:0;10.22.16.34:56228] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2016-08-10 15:45:05,670 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56226,1470869103454 with port=56226, startcode=1470869103454
2016-08-10 15:45:05,671 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56226,1470869103454 with port=56228, startcode=1470869104167
2016-08-10 15:45:05,672 DEBUG [M:0;10.22.16.34:56226] regionserver.HRegionServer(2358): Master is not running yet
2016-08-10 15:45:05,673 WARN [M:0;10.22.16.34:56226] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying.
2016-08-10 15:45:05,678 INFO [10.22.16.34:56226.activeMasterManager] coprocessor.CoprocessorHost(161): System coprocessor org.apache.hadoop.hbase.backup.master.BackupController was loaded successfully with priority (536870911).
2016-08-10 15:45:05,693 DEBUG [10.22.16.34:56226.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-10.22.16.34:56226, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:05,693 DEBUG [10.22.16.34:56226.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-10.22.16.34:56226, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:05,693 DEBUG [10.22.16.34:56226.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-10.22.16.34:56226, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:05,693 DEBUG [10.22.16.34:56226.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-10.22.16.34:56226, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:05,694 DEBUG [10.22.16.34:56226.activeMasterManager] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-10.22.16.34:56226, corePoolSize=10, maxPoolSize=10
2016-08-10 15:45:05,694 DEBUG [10.22.16.34:56226.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-10.22.16.34:56226, corePoolSize=1, maxPoolSize=1
2016-08-10 15:45:05,838 INFO [10.22.16.34:56226.activeMasterManager] procedure2.ProcedureExecutor(487): Starting procedure executor threads=9
2016-08-10 15:45:05,839 INFO [10.22.16.34:56226.activeMasterManager] wal.WALProcedureStore(296): Starting WAL Procedure Store lease recovery
2016-08-10 15:45:05,841 WARN [10.22.16.34:56226.activeMasterManager] wal.WALProcedureStore(941): Log directory not found: File hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/MasterProcWALs does not exist.
2016-08-10 15:45:05,858 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service RegionServerStatusService, sasl=false
2016-08-10 15:45:05,859 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56236; # active connections: 1
2016-08-10 15:45:05,864 DEBUG [10.22.16.34:56226.activeMasterManager] wal.WALProcedureStore(833): Roll new state log: 1
2016-08-10 15:45:05,866 INFO [10.22.16.34:56226.activeMasterManager] wal.WALProcedureStore(319): Lease acquired for flushLogId: 1
2016-08-10 15:45:05,867 DEBUG [10.22.16.34:56226.activeMasterManager] wal.WALProcedureStore(336): No state logs to replay.
2016-08-10 15:45:05,867 DEBUG [10.22.16.34:56226.activeMasterManager] procedure2.ProcedureExecutor$1(298): load procedures maxProcId=0
2016-08-10 15:45:05,878 DEBUG [10.22.16.34:56226.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-10 15:45:05,879 DEBUG [10.22.16.34:56226.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2016-08-10 15:45:05,879 INFO [10.22.16.34:56226.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:05,882 DEBUG [10.22.16.34:56226.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:05,883 DEBUG [10.22.16.34:56226.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(674): replicationLogCleaner-0x15676a151160004 connected
2016-08-10 15:45:05,890 DEBUG [10.22.16.34:56226.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2016-08-10 15:45:05,892 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu.hfs.0 (auth:SIMPLE)
2016-08-10 15:45:05,892 DEBUG [10.22.16.34:56226.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2016-08-10 15:45:05,895 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56236 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:45:05,897 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] ipc.CallRunner(112): B.defaultRpcServer.handler=0,queue=0,port=56226: callId: 0 service: RegionServerStatusService methodName: RegionServerStartup size: 45 connection: 10.22.16.34:56236
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2295)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:264)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:45:05,901 DEBUG [10.22.16.34:56226.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2016-08-10 15:45:05,902 DEBUG [10.22.16.34:56226.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2016-08-10 15:45:05,902 INFO [10.22.16.34:56226.activeMasterManager] master.ServerManager(1008): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
2016-08-10 15:45:05,902 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56226,1470869103454 with port=56226, startcode=1470869103454
2016-08-10 15:45:05,908 DEBUG [RS:0;10.22.16.34:56228] regionserver.HRegionServer(2358): Master is not running yet
2016-08-10 15:45:05,908 WARN [RS:0;10.22.16.34:56228] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying.
2016-08-10 15:45:05,956 INFO [M:0;10.22.16.34:56226] master.ServerManager(456): Registering server=10.22.16.34,56226,1470869103454
2016-08-10 15:45:05,969 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9
2016-08-10 15:45:05,969 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:56218
2016-08-10 15:45:05,969 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1
2016-08-10 15:45:05,969 WARN [M:0;10.22.16.34:56226] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-10 15:45:05,969 INFO [M:0;10.22.16.34:56226] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:05,970 DEBUG [M:0;10.22.16.34:56226] regionserver.HRegionServer(1654): logdir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,010 INFO [10.22.16.34:56226.activeMasterManager] master.ServerManager(1025): Finished waiting for region servers count to settle; checked in 1, slept for 108 ms, expecting minimum of 1, maximum of 1, master is running
2016-08-10 15:45:06,010 INFO [10.22.16.34:56226.activeMasterManager] master.ServerManager(456): Registering server=10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,010 INFO [10.22.16.34:56226.activeMasterManager] master.HMaster(710): Registered server found up in zk but who has not yet reported in: 10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,021 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2016-08-10 15:45:06,042 DEBUG [M:0;10.22.16.34:56226] regionserver.Replication(151): ReplicationStatisticsThread 300
2016-08-10 15:45:06,092 INFO [M:0;10.22.16.34:56226] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-10 15:45:06,097 INFO [M:0;10.22.16.34:56226] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-10 15:45:06,139 INFO [M:0;10.22.16.34:56226] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds
2016-08-10 15:45:06,156 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.16.34:56226, corePoolSize=3, maxPoolSize=3
2016-08-10 15:45:06,156 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.16.34:56226, corePoolSize=1, maxPoolSize=1
2016-08-10 15:45:06,156 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.16.34:56226, corePoolSize=3, maxPoolSize=3
2016-08-10 15:45:06,156 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.16.34:56226, corePoolSize=1, maxPoolSize=1
2016-08-10 15:45:06,157 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.16.34:56226, corePoolSize=2, maxPoolSize=2
2016-08-10 15:45:06,157 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56226, corePoolSize=10, maxPoolSize=10
2016-08-10 15:45:06,157 DEBUG [M:0;10.22.16.34:56226] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.16.34:56226, corePoolSize=3, maxPoolSize=3
2016-08-10 15:45:06,160 DEBUG [M:0;10.22.16.34:56226] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,160 DEBUG [M:0;10.22.16.34:56226] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,160 INFO [M:0;10.22.16.34:56226] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.16.34,56226,1470869103454] other RSs: [10.22.16.34,56228,1470869104167, 10.22.16.34,56226,1470869103454]
2016-08-10 15:45:06,249 INFO [SplitLogWorker-10.22.16.34:56226] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.16.34,56226,1470869103454 starting
2016-08-10 15:45:06,269 INFO [M:0;10.22.16.34:56226] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore.
2016-08-10 15:45:06,283 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1412): Serving as 10.22.16.34,56226,1470869103454, RpcServer on 10.22.16.34/10.22.16.34:56226, sessionid=0x15676a151160000
2016-08-10 15:45:06,283 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting
2016-08-10 15:45:06,284 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56226,1470869103454'
2016-08-10 15:45:06,284 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-10 15:45:06,285 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-10 15:45:06,285 INFO [M:0;10.22.16.34:56226] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager.
2016-08-10 15:45:06,285 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started
2016-08-10 15:45:06,285 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting
2016-08-10 15:45:06,286 DEBUG [M:0;10.22.16.34:56226] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,286 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56226,1470869103454'
2016-08-10 15:45:06,286 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:06,286 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:06,287 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started
2016-08-10 15:45:06,287 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting
2016-08-10 15:45:06,287 DEBUG [M:0;10.22.16.34:56226] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,287 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56226,1470869103454'
2016-08-10 15:45:06,287 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/flush-table-proc/abort'
2016-08-10 15:45:06,288 DEBUG [M:0;10.22.16.34:56226] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/flush-table-proc/acquired'
2016-08-10 15:45:06,288 DEBUG [M:0;10.22.16.34:56226] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started
2016-08-10 15:45:06,307 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2016-08-10 15:45:06,307 INFO [10.22.16.34:56226.activeMasterManager] master.HMaster(938): Re-assigning hbase:meta with replicaId, 0 it was on null
2016-08-10 15:45:06,325 DEBUG [10.22.16.34:56226.activeMasterManager] master.AssignmentManager(1291): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=10.22.16.34,56226,1470869103454; 2 (online=2) available servers, forceNewPlan=false
2016-08-10 15:45:06,325 INFO [10.22.16.34:56226.activeMasterManager] master.AssignmentManager(1080): Assigning hbase:meta,,1.1588230740 to 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,326 INFO [10.22.16.34:56226.activeMasterManager] master.RegionStates(1106): Transition {1588230740 state=OFFLINE, ts=1470869106307, server=null} to {1588230740 state=PENDING_OPEN, ts=1470869106326, server=10.22.16.34,56226,1470869103454}
2016-08-10 15:45:06,326 INFO [10.22.16.34:56226.activeMasterManager] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,329 INFO [M:0;10.22.16.34:56226] quotas.RegionServerQuotaManager(62): Quota support disabled
2016-08-10 15:45:06,333 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.MetaTableLocator(451): META region location doesn't exist, create it
2016-08-10 15:45:06,335 DEBUG [10.22.16.34:56226.activeMasterManager] master.ServerManager(934): New admin connection to 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,408 INFO [10.22.16.34:56226.activeMasterManager] regionserver.RSRpcServices(1666): Open hbase:meta,,1.1588230740
2016-08-10 15:45:06,413 INFO [RS_OPEN_META-10.22.16.34:56226-0] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-10 15:45:06,413 INFO [RS_OPEN_META-10.22.16.34:56226-0] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-10 15:45:06,429 INFO [RS_OPEN_META-10.22.16.34:56226-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:45:06,456 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:06,464 INFO [RS_OPEN_META-10.22.16.34:56226-0] wal.FSHLog(1434): Slow sync cost: 7 ms, current pipeline: []
2016-08-10 15:45:06,465 INFO [RS_OPEN_META-10.22.16.34:56226-0] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:06,488 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] regionserver.HRegion(6339): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:45:06,536 DEBUG [10.22.16.34:56226.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869106470,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"}
2016-08-10 15:45:06,564 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] coprocessor.CoprocessorHost(181): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2016-08-10 15:45:06,577 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] regionserver.HRegion(7445): Registered coprocessor service: region=hbase:meta,,1 service=hbase.pb.MultiRowMutationService
2016-08-10 15:45:06,582 INFO [RS_OPEN_META-10.22.16.34:56226-0] regionserver.RegionCoprocessorHost(376): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2016-08-10 15:45:06,627 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table meta 1588230740
2016-08-10 15:45:06,627 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740
2016-08-10 15:45:06,647 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:06,648 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:06,649 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/info
2016-08-10 15:45:06,651 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:06,652 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:06,653 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/table
2016-08-10 15:45:06,658 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740
2016-08-10 15:45:06,661 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead
2016-08-10 15:45:06,668 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/recovered.edits/3.seqid to file, newSeqId=3, maxSeqId=2
2016-08-10 15:45:06,670 INFO [RS_OPEN_META-10.22.16.34:56226-0] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=3
2016-08-10 15:45:06,719 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
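The CacheConfig lines above print the LruBlockCache sizing for this JVM. The logged numbers relate to each other by simple factors; the sketch below only reproduces that arithmetic from the logged values (the formulas are inferred from how the numbers fit together, with small float-rounding differences expected):

    public class LruCacheSizingSketch {
      public static void main(String[] args) {
        long maxSize = 1043962304L;  // total on-heap block cache, from the log line above
        float minFactor = 0.95f;     // eviction drains the cache down to this fill ratio
        float singleFactor = 0.25f;  // share reserved for single-access blocks
        float multiFactor = 0.50f;   // share reserved for multi-access blocks
        long minSize    = (long) (maxSize * minFactor);                // ~991764160
        long singleSize = (long) (maxSize * singleFactor * minFactor); // ~247941040
        long multiSize  = (long) (maxSize * multiFactor * minFactor);  // ~495882080
        System.out.printf("min=%d single=%d multi=%d%n", minSize, singleSize, multiSize);
      }
    }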
2016-08-10 15:45:06,724 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:meta,,1.1588230740
2016-08-10 15:45:06,732 DEBUG [PostOpenDeployTasks:1588230740] master.AssignmentManager(2884): Got transition OPENED for {1588230740 state=PENDING_OPEN, ts=1470869106326, server=10.22.16.34,56226,1470869103454} from 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,732 INFO [PostOpenDeployTasks:1588230740] master.RegionStates(1106): Transition {1588230740 state=PENDING_OPEN, ts=1470869106326, server=10.22.16.34,56226,1470869103454} to {1588230740 state=OPEN, ts=1470869106732, server=10.22.16.34,56226,1470869103454}
2016-08-10 15:45:06,732 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,735 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/meta-region-server
2016-08-10 15:45:06,735 DEBUG [PostOpenDeployTasks:1588230740] master.RegionStates(452): Onlined 1588230740 on 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,737 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:meta,,1.1588230740
2016-08-10 15:45:06,738 DEBUG [RS_OPEN_META-10.22.16.34:56226-0] handler.OpenRegionHandler(126): Opened hbase:meta,,1.1588230740 on 10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,913 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56226,1470869103454 with port=56228, startcode=1470869104167
2016-08-10 15:45:06,917 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.ServerManager(456): Registering server=10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,919 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9
2016-08-10 15:45:06,919 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:56218
2016-08-10 15:45:06,919 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1
2016-08-10 15:45:06,919 WARN [RS:0;10.22.16.34:56228] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-10 15:45:06,919 INFO [RS:0;10.22.16.34:56228] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:06,920 DEBUG [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1654): logdir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,925 DEBUG [RS:0;10.22.16.34:56228] regionserver.Replication(151): ReplicationStatisticsThread 300
2016-08-10 15:45:06,926 INFO [RS:0;10.22.16.34:56228] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-10 15:45:06,926 INFO [RS:0;10.22.16.34:56228] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-10 15:45:06,926 INFO [RS:0;10.22.16.34:56228] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds
2016-08-10 15:45:06,927 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.16.34:56228, corePoolSize=3, maxPoolSize=3
2016-08-10 15:45:06,928 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.16.34:56228, corePoolSize=1, maxPoolSize=1
2016-08-10 15:45:06,928 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.16.34:56228, corePoolSize=3, maxPoolSize=3
2016-08-10 15:45:06,928 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.16.34:56228, corePoolSize=1, maxPoolSize=1
2016-08-10 15:45:06,928 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.16.34:56228, corePoolSize=2, maxPoolSize=2
2016-08-10 15:45:06,928 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228, corePoolSize=10, maxPoolSize=10
2016-08-10 15:45:06,928 DEBUG [RS:0;10.22.16.34:56228] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.16.34:56228, corePoolSize=3, maxPoolSize=3
2016-08-10 15:45:06,930 DEBUG [RS:0;10.22.16.34:56228] zookeeper.ZKUtil(365): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,930 DEBUG [RS:0;10.22.16.34:56228] zookeeper.ZKUtil(365): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:45:06,930 INFO [RS:0;10.22.16.34:56228] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.16.34,56228,1470869104167, 10.22.16.34,56226,1470869103454] other RSs: [10.22.16.34,56228,1470869104167, 10.22.16.34,56226,1470869103454]
2016-08-10 15:45:06,967 INFO [RS:0;10.22.16.34:56228] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore.
2016-08-10 15:45:06,967 INFO [SplitLogWorker-10.22.16.34:56228] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.16.34,56228,1470869104167 starting
2016-08-10 15:45:06,967 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1412): Serving as 10.22.16.34,56228,1470869104167, RpcServer on 10.22.16.34/10.22.16.34:56228, sessionid=0x15676a151160001
2016-08-10 15:45:06,968 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting
2016-08-10 15:45:06,968 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56228,1470869104167'
2016-08-10 15:45:06,968 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-10 15:45:06,968 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-10 15:45:06,969 INFO [RS:0;10.22.16.34:56228] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager.
2016-08-10 15:45:06,969 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started
2016-08-10 15:45:06,969 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting
2016-08-10 15:45:06,969 DEBUG [RS:0;10.22.16.34:56228] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,970 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56228,1470869104167'
2016-08-10 15:45:06,970 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:06,970 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:06,971 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started
2016-08-10 15:45:06,971 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting
2016-08-10 15:45:06,971 DEBUG [RS:0;10.22.16.34:56228] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.16.34,56228,1470869104167
2016-08-10 15:45:06,971 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56228,1470869104167'
2016-08-10 15:45:06,971 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/flush-table-proc/abort'
2016-08-10 15:45:06,972 DEBUG [RS:0;10.22.16.34:56228] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/flush-table-proc/acquired'
2016-08-10 15:45:06,973 DEBUG [RS:0;10.22.16.34:56228] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started
2016-08-10 15:45:06,973 INFO [RS:0;10.22.16.34:56228] quotas.RegionServerQuotaManager(62): Quota support disabled
2016-08-10 15:45:07,034 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:07,063 INFO [10.22.16.34:56226.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META
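"Quota support disabled" above (on both the master and the region server) is the default. A sketch of the switch a test would flip before starting the cluster to exercise quotas; the key matches QuotaUtil's documented configuration constant, treated here as an assumption for this snapshot build:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Enables MasterQuotaManager / RegionServerQuotaManager at startup
        conf.setBoolean("hbase.quota.enabled", true);
      }
    }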
2016-08-10 15:45:07,064 DEBUG [10.22.16.34:56226.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869107064,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"}
2016-08-10 15:45:07,065 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:07,067 INFO [10.22.16.34:56226.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META
2016-08-10 15:45:07,339 INFO [M:0;10.22.16.34:56226] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56226%2C1470869103454.regiongroup-0, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:45:07,342 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339
2016-08-10 15:45:07,346 DEBUG [10.22.16.34:56226.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(server crash processing)
2016-08-10 15:45:07,346 INFO [10.22.16.34:56226.activeMasterManager] master.ServerManager(683): AssignmentManager hasn't finished failover cleanup; waiting
2016-08-10 15:45:07,348 INFO [M:0;10.22.16.34:56226] wal.FSHLog(1434): Slow sync cost: 6 ms, current pipeline: []
2016-08-10 15:45:07,348 INFO [10.22.16.34:56226.activeMasterManager] master.HMaster(965): hbase:meta with replicaId 0 assigned=1, location=10.22.16.34,56226,1470869103454
2016-08-10 15:45:07,349 INFO [M:0;10.22.16.34:56226] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339
2016-08-10 15:45:07,365 INFO [10.22.16.34:56226.activeMasterManager] master.AssignmentManager(555): Clean cluster startup. Don't reassign user regions
2016-08-10 15:45:07,367 INFO [10.22.16.34:56226.activeMasterManager] master.AssignmentManager(425): Joined the cluster in 11ms, failover=false
2016-08-10 15:45:07,370 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/info
2016-08-10 15:45:07,371 DEBUG [10.22.16.34:56226.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/table
2016-08-10 15:45:07,489 INFO [10.22.16.34:56226.activeMasterManager] master.TableNamespaceManager(93): Namespace table not found. Creating...
2016-08-10 15:45:07,697 DEBUG [10.22.16.34:56226.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-10 15:45:07,750 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:namespace/write-master:562260000000000
2016-08-10 15:45:07,878 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:07,881 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:07,892 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6162): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == hbase:namespace
2016-08-10 15:45:07,904 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:07,905 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(736): Instantiated hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:45:07,905 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1419): Closing hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.: disabling compactions & flushes
2016-08-10 15:45:07,905 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:45:07,905 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1552): Closed hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:45:07,985 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56228%2C1470869104167.regiongroup-0, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:45:07,989 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985
2016-08-10 15:45:07,996 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(1434): Slow sync cost: 7 ms, current pipeline: []
2016-08-10 15:45:07,996 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985
2016-08-10 15:45:08,032 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":41}]},"row":"hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884."}
2016-08-10 15:45:08,033 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:08,035 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:45:08,145 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,147 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table hbase:namespace state
org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace
	at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
	at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
	at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
	at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:45:08,147 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {c6ed9588ab8edcac411fa2b23646f884 state=OFFLINE, ts=1470869108145, server=null} to {c6ed9588ab8edcac411fa2b23646f884 state=PENDING_OPEN, ts=1470869108147, server=10.22.16.34,56226,1470869103454}
2016-08-10 15:45:08,147 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884. with state=PENDING_OPEN, sn=10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,148 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:08,150 INFO [ProcedureExecutor-0] regionserver.RSRpcServices(1666): Open hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:45:08,156 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,157 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869108157,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:namespace"}
2016-08-10 15:45:08,158 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:08,159 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table hbase:namespace state to ENABLED in META
2016-08-10 15:45:08,160 INFO [RS_OPEN_REGION-10.22.16.34:56226-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56226%2C1470869103454.regiongroup-1, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:45:08,164 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:45:08,169 INFO [RS_OPEN_REGION-10.22.16.34:56226-0] wal.FSHLog(1434): Slow sync cost: 5 ms, current pipeline: []
2016-08-10 15:45:08,170 INFO [RS_OPEN_REGION-10.22.16.34:56226-0] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:45:08,171 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] regionserver.HRegion(6339): Opening region: {ENCODED => c6ed9588ab8edcac411fa2b23646f884, NAME => 'hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:45:08,171 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table namespace c6ed9588ab8edcac411fa2b23646f884
2016-08-10 15:45:08,171 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] regionserver.HRegion(736): Instantiated hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
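The ERROR above is a transient race inside CreateTableProcedure: AssignmentManager consults TableStateManager for hbase:namespace before the procedure has written the table-state row to hbase:meta, so a TableNotFoundException is logged and assignment continues regardless (the Transition line that follows shows the region moving to PENDING_OPEN). Client code that wants to avoid hitting the same lookup failure can gate on existence first; a minimal sketch using the public Admin API (connection setup is illustrative):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TableStateCheckSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("hbase:namespace");
          // Check the table is known and enabled before reading from it
          if (admin.tableExists(tn) && admin.isTableEnabled(tn)) {
            System.out.println(tn + " is online");
          }
        }
      }
    }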
2016-08-10 15:45:08,175 INFO [StoreOpener-c6ed9588ab8edcac411fa2b23646f884-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:08,176 INFO [StoreOpener-c6ed9588ab8edcac411fa2b23646f884-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:08,177 DEBUG [StoreOpener-c6ed9588ab8edcac411fa2b23646f884-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/info
2016-08-10 15:45:08,178 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884
2016-08-10 15:45:08,185 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:45:08,186 INFO [RS_OPEN_REGION-10.22.16.34:56226-0] regionserver.HRegion(871): Onlined c6ed9588ab8edcac411fa2b23646f884; next sequenceid=2
2016-08-10 15:45:08,186 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:45:08,187 INFO [PostOpenDeployTasks:c6ed9588ab8edcac411fa2b23646f884] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:45:08,188 DEBUG [PostOpenDeployTasks:c6ed9588ab8edcac411fa2b23646f884] master.AssignmentManager(2884): Got transition OPENED for {c6ed9588ab8edcac411fa2b23646f884 state=PENDING_OPEN, ts=1470869108147, server=10.22.16.34,56226,1470869103454} from 10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,188 INFO [PostOpenDeployTasks:c6ed9588ab8edcac411fa2b23646f884] master.RegionStates(1106): Transition {c6ed9588ab8edcac411fa2b23646f884 state=PENDING_OPEN, ts=1470869108147, server=10.22.16.34,56226,1470869103454} to {c6ed9588ab8edcac411fa2b23646f884 state=OPEN, ts=1470869108188, server=10.22.16.34,56226,1470869103454}
2016-08-10 15:45:08,188 INFO [PostOpenDeployTasks:c6ed9588ab8edcac411fa2b23646f884] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884. with state=OPEN, openSeqNum=2, server=10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,188 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:08,190 DEBUG [PostOpenDeployTasks:c6ed9588ab8edcac411fa2b23646f884] master.RegionStates(452): Onlined c6ed9588ab8edcac411fa2b23646f884 on 10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,193 DEBUG [PostOpenDeployTasks:c6ed9588ab8edcac411fa2b23646f884] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:45:08,193 DEBUG [RS_OPEN_REGION-10.22.16.34:56226-0] handler.OpenRegionHandler(126): Opened hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884. on 10.22.16.34,56226,1470869103454
2016-08-10 15:45:08,245 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/namespace
2016-08-10 15:45:08,247 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/namespace
2016-08-10 15:45:08,379 DEBUG [10.22.16.34:56226.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
2016-08-10 15:45:08,478 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:namespace/write-master:562260000000000
2016-08-10 15:45:08,478 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 876msec: CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=FINISHED
2016-08-10 15:45:08,708 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:45:08,819 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2016-08-10 15:45:08,823 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default
2016-08-10 15:45:09,034 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 637msec: CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=FINISHED
2016-08-10 15:45:09,148 DEBUG [10.22.16.34:56226.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
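The CreateNamespaceProcedure runs above create the built-in 'default' and 'hbase' namespaces at master startup; a user namespace goes through the same procedure when driven from the public Admin API. A minimal sketch (the namespace name is illustrative, and 'admin' is obtained as in the earlier connection sketch):

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public class CreateNamespaceSketch {
      // Triggers a CreateNamespaceProcedure on the master
      static void createTestNamespace(Admin admin) throws IOException {
        admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
      }
    }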
2016-08-10 15:45:09,365 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:45:09,472 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2016-08-10 15:45:09,474 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default
2016-08-10 15:45:09,474 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase
2016-08-10 15:45:09,687 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 542msec: CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=FINISHED
2016-08-10 15:45:09,695 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/namespace/default already exists
2016-08-10 15:45:09,696 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/default
2016-08-10 15:45:09,696 DEBUG [10.22.16.34:56226.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/namespace/hbase already exists
2016-08-10 15:45:09,697 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/hbase
2016-08-10 15:45:09,698 INFO [10.22.16.34:56226.activeMasterManager] master.HMaster(807): Master has completed initialization
2016-08-10 15:45:09,698 DEBUG [10.22.16.34:56226.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(master initialized)
2016-08-10 15:45:09,704 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x60ab7379 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:09,706 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x60ab73790x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:09,707 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@543ee3ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:45:09,708 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:09,708 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:09,708 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x60ab7379-0x15676a151160005 connected
2016-08-10 15:45:09,710 INFO [10.22.16.34:56226.activeMasterManager] quotas.MasterQuotaManager(72): Quota support disabled
2016-08-10 15:45:09,711 INFO [10.22.16.34:56226.activeMasterManager] zookeeper.ZooKeeperWatcher(225): not a secure deployment, proceeding
2016-08-10 15:45:09,723 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:45:09,724 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56248; # active connections: 2
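The hconnection-0x60ab7379 / AsyncRpcClient lines above are the client side of a connection being opened against the freshly initialized master. In test code that whole sequence is a single call; a minimal sketch with the quorum and client port taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientConnectionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");          // ensemble from the log
        conf.set("hbase.zookeeper.property.clientPort", "50432"); // mini-ZK client port
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Table and Admin handles are created from 'conn'
        }
      }
    }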
2016-08-10 15:45:09,724 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:45:09,725 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56248 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:45:09,782 INFO [main] hbase.HBaseTestingUtility(1089): Minicluster is up
2016-08-10 15:45:09,782 INFO [main] hbase.HBaseTestingUtility(1263): The hbase.fs.tmp.dir is set to /user/tyu/hbase-staging
2016-08-10 15:45:09,782 INFO [main] hbase.HBaseTestingUtility(1013): Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2016-08-10 15:45:09,798 INFO [main] hbase.HBaseTestingUtility(428): System.getProperty("hadoop.log.dir") already set to: /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_logs so I do NOT create it in target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540
2016-08-10 15:45:09,798 WARN [main] hbase.HBaseTestingUtility(432): hadoop.log.dir property value differs in configuration and system: Configuration=/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop-log-dir while System=/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_logs Erasing configuration value by system value.
2016-08-10 15:45:09,798 INFO [main] hbase.HBaseTestingUtility(428): System.getProperty("hadoop.tmp.dir") already set to: /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_tmp so I do NOT create it in target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540
2016-08-10 15:45:09,798 WARN [main] hbase.HBaseTestingUtility(432): hadoop.tmp.dir property value differs in configuration and system: Configuration=/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop-tmp-dir while System=/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_tmp Erasing configuration value by system value.
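"Minicluster is up" marks the end of the first cluster's startup, and the very next lines begin a second one (this test apparently runs two mini-clusters, e.g. a backup source and destination). The test-side call behind "Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)" is sketched below; startMiniCluster() blocks until the master has completed initialization, as seen earlier in this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(); // defaults: 1 master, 1 region server, 1 datanode
        try {
          // run test logic against util.getConnection() ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }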
2016-08-10 15:45:09,798 INFO [main] hbase.HBaseTestingUtility(496): Created new mini-cluster data directory: /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/dfscluster_c8a285b4-f1aa-4075-b261-2da854c81454, deleteOnExit=true
2016-08-10 15:45:09,799 INFO [main] hbase.HBaseTestingUtility(743): Setting test.cache.data to /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/cache_data in system properties and HBase conf
2016-08-10 15:45:09,799 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.tmp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop_tmp in system properties and HBase conf
2016-08-10 15:45:09,799 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.log.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop_logs in system properties and HBase conf
2016-08-10 15:45:09,799 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.local.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/mapred_local in system properties and HBase conf
2016-08-10 15:45:09,799 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.temp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/mapred_temp in system properties and HBase conf
2016-08-10 15:45:09,799 INFO [main] hbase.HBaseTestingUtility(734): read short circuit is OFF
2016-08-10 15:45:09,800 DEBUG [main] fs.HFileSystem(221): The file system is not a DistributedFileSystem. Skipping on block location reordering
2016-08-10 15:45:09,801 INFO [10.22.16.34:56226.activeMasterManager] master.HMaster(1495): Client=null/null create 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
Formatting using clusterid: testClusterID
2016-08-10 15:45:09,840 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-10 15:45:09,843 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/hdfs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_56249_hdfs____.4tvpls/webapp
2016-08-10 15:45:09,905 DEBUG [10.22.16.34:56226.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-10 15:45:09,909 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:562260000000000
2016-08-10 15:45:09,910 INFO [10.22.16.34:56226.activeMasterManager] master.BackupController(51): Created hbase:backup table
2016-08-10 15:45:09,922 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:56249
2016-08-10 15:45:10,234 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:10,238 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/hbase/backup/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:10,239 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(6162): creating HRegion hbase:backup HTD == 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == hbase:backup
2016-08-10 15:45:10,251 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:10,252 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(736): Instantiated hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:45:10,252 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1419): Closing hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.: disabling compactions & flushes
2016-08-10 15:45:10,253 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:45:10,253 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1552): Closed hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
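The HMaster create line and the HRegion "creating HRegion hbase:backup HTD ==" line print the schema of the backup system table: two column families, 'meta' and 'session', each with VERSIONS=1 and BLOCKSIZE=65536. A sketch that rebuilds that descriptor with the HTableDescriptor-style API of this era (non-default attributes only; the rest match the logged defaults):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;

    public class BackupTableDescriptorSketch {
      static HTableDescriptor backupTableDescriptor() {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("hbase", "backup"));
        for (String family : new String[] {"meta", "session"}) {
          HColumnDescriptor hcd = new HColumnDescriptor(family);
          hcd.setMaxVersions(1);   // VERSIONS => '1'
          hcd.setBlocksize(65536); // BLOCKSIZE => '65536'
          htd.addFamily(hcd);
        }
        return htd;
      }
    }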
2016-08-10 15:45:10,295 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-10 15:45:10,298 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_56254_datanode____.qhluyx/webapp
2016-08-10 15:45:10,363 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":38}]},"row":"hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201."}
2016-08-10 15:45:10,364 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:10,365 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:45:10,370 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:56254
2016-08-10 15:45:10,449 INFO [IPC Server handler 4 on 56251] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4 node DatanodeRegistration(127.0.0.1:56253, datanodeUuid=8a9680a1-308c-48dd-898f-02613d074ad5, infoPort=56255, infoSecurePort=0, ipcPort=56256, storageInfo=lv=-56;cid=testClusterID;nsid=244454800;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2016-08-10 15:45:10,449 INFO [IPC Server handler 4 on 56251] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df node DatanodeRegistration(127.0.0.1:56253, datanodeUuid=8a9680a1-308c-48dd-898f-02613d074ad5, infoPort=56255, infoSecurePort=0, ipcPort=56256, storageInfo=lv=-56;cid=testClusterID;nsid=244454800;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-08-10 15:45:10,470 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,471 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table hbase:backup state
org.apache.hadoop.hbase.TableNotFoundException: hbase:backup
	at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
	at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
	at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
	at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:45:10,471 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {bb117bea47747375164e98ce6287a201 state=OFFLINE, ts=1470869110470, server=null} to {bb117bea47747375164e98ce6287a201 state=PENDING_OPEN, ts=1470869110471, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:45:10,471 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,471 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:10,472 DEBUG [ProcedureExecutor-3] master.ServerManager(934): New admin connection to 10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,483 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service AdminService, sasl=false
2016-08-10 15:45:10,483 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56259; # active connections: 1
2016-08-10 15:45:10,484 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:45:10,485 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56259 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:45:10,485 INFO [PriorityRpcServer.handler=1,queue=1,port=56228] regionserver.RSRpcServices(1666): Open hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:45:10,492 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,492 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869110492,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:backup"}
2016-08-10 15:45:10,494 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:10,495 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table hbase:backup state to ENABLED in META
2016-08-10 15:45:10,496 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56228%2C1470869104167.regiongroup-1, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:45:10,497 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:10,499 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:45:10,499 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:10,504 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] wal.FSHLog(1434): Slow sync cost: 5 ms, current pipeline: []
2016-08-10 15:45:10,505 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:45:10,506 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(6339): Opening region: {ENCODED => bb117bea47747375164e98ce6287a201, NAME => 'hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:45:10,507 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table backup bb117bea47747375164e98ce6287a201
2016-08-10 15:45:10,507 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(736): Instantiated hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:45:10,513 INFO [StoreOpener-bb117bea47747375164e98ce6287a201-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:10,516 INFO [StoreOpener-bb117bea47747375164e98ce6287a201-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:10,518 DEBUG [StoreOpener-bb117bea47747375164e98ce6287a201-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/meta
2016-08-10 15:45:10,518 INFO [IPC Server handler 1 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|FINALIZED]]} size 0
2016-08-10 15:45:10,521 INFO [main] util.FSUtils(749): Created version file at hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57 with version=8
2016-08-10 15:45:10,521 INFO [StoreOpener-bb117bea47747375164e98ce6287a201-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:10,522 INFO [StoreOpener-bb117bea47747375164e98ce6287a201-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:10,523 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-10 15:45:10,523 DEBUG [StoreOpener-bb117bea47747375164e98ce6287a201-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/session
2016-08-10 15:45:10,524 INFO [main] client.ConnectionUtils(106): master//10.22.16.34:0 server-side HConnection retries=350
2016-08-10 15:45:10,524 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-10 15:45:10,525 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201
recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201 2016-08-10 15:45:10,525 INFO [main] ipc.RpcServer$Listener(635): master//10.22.16.34:0: started 3 reader(s) listening on port=56262 2016-08-10 15:45:10,527 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:10,528 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:10,528 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:backup, use config (67108864) instead 2016-08-10 15:45:10,529 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-08-10 15:45:10,532 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=master:56262 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:45:10,534 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-10 15:45:10,534 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(871): Onlined bb117bea47747375164e98ce6287a201; next sequenceid=2 2016-08-10 15:45:10,534 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:562620x0, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:45:10,535 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:10,536 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): master:56262-0x15676a151160006 connected 2016-08-10 15:45:10,536 INFO [PostOpenDeployTasks:bb117bea47747375164e98ce6287a201] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201. 
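[The records above trace the hbase:backup system table through WAL allocation and region open until it is marked ENABLED in META. A minimal Java sketch of how a test could confirm that state from the client side; 'util' is assumed to be the HBaseTestingUtility driving this run, and the Admin/TableName calls are standard HBase client API:]

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch only: 'util' is assumed to be the HBaseTestingUtility that started this mini-cluster.
    TableName backup = TableName.valueOf("hbase", "backup");
    try (Admin admin = util.getConnection().getAdmin()) {
      // Mirrors "Updated table hbase:backup state to ENABLED in META" above.
      boolean ready = admin.tableExists(backup) && admin.isTableEnabled(backup);
      System.out.println("hbase:backup enabled: " + ready);
    }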
2016-08-10 15:45:10,538 DEBUG [PriorityRpcServer.handler=1,queue=1,port=56226] master.AssignmentManager(2884): Got transition OPENED for {bb117bea47747375164e98ce6287a201 state=PENDING_OPEN, ts=1470869110471, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,538 INFO [PriorityRpcServer.handler=1,queue=1,port=56226] master.RegionStates(1106): Transition {bb117bea47747375164e98ce6287a201 state=PENDING_OPEN, ts=1470869110471, server=10.22.16.34,56228,1470869104167} to {bb117bea47747375164e98ce6287a201 state=OPEN, ts=1470869110538, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:45:10,538 INFO [PriorityRpcServer.handler=1,queue=1,port=56226] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,539 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:10,540 DEBUG [PriorityRpcServer.handler=1,queue=1,port=56226] master.RegionStates(452): Onlined bb117bea47747375164e98ce6287a201 on 10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,542 DEBUG [PostOpenDeployTasks:bb117bea47747375164e98ce6287a201] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:45:10,542 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] handler.OpenRegionHandler(126): Opened hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201. on 10.22.16.34,56228,1470869104167
2016-08-10 15:45:10,544 DEBUG [main] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-08-10 15:45:10,544 DEBUG [main] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-10 15:45:10,544 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-10 15:45:10,544 INFO [RpcServer.listener,port=56262] ipc.RpcServer$Listener(769): RpcServer.listener,port=56262: starting
2016-08-10 15:45:10,544 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-10 15:45:10,545 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-10 15:45:10,545 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-10 15:45:10,545 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-10 15:45:10,545 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-10 15:45:10,546 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-10 15:45:10,546 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-10 15:45:10,546 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-10 15:45:10,546 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
2016-08-10 15:45:10,546 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-10 15:45:10,546 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-10 15:45:10,547 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-10 15:45:10,547 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-10 15:45:10,547 INFO [main] master.HMaster(397): hbase.rootdir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57, hbase.cluster.distributed=false
2016-08-10 15:45:10,548 DEBUG [main] impl.BackupManager(134): Added log cleaner: org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-10 15:45:10,548 DEBUG [main] impl.BackupManager(135): Added master procedure manager: org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
2016-08-10 15:45:10,548 DEBUG [main] impl.BackupManager(136): Added master observer: org.apache.hadoop.hbase.backup.master.BackupController
2016-08-10 15:45:10,548 INFO [main] master.HMaster(1719): Adding backup master ZNode /2/backup-masters/10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,549 DEBUG [main] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/backup-masters/10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,550 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/master
2016-08-10 15:45:10,551 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/master
2016-08-10 15:45:10,551 INFO [10.22.16.34:56262.activeMasterManager] master.ActiveMasterManager(170): Deleting ZNode for /2/backup-masters/10.22.16.34,56262,1470869110526 from backup master directory
2016-08-10 15:45:10,552 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/master
2016-08-10 15:45:10,552 DEBUG [main-EventThread] master.ActiveMasterManager(126): A master is now available
2016-08-10 15:45:10,552 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/backup-masters/10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,552 WARN [10.22.16.34:56262.activeMasterManager] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
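[This second cluster keeps all of its znodes under baseZNode=/2; the master briefly registers under /2/backup-masters and then claims /2/master. A sketch, using the plain ZooKeeper client, of watching that znode; the ensemble address and paths are copied from the log lines above, and a real HBase client would instead point at the parent via conf.set("zookeeper.znode.parent", "/2"):]

    import org.apache.zookeeper.ZooKeeper;

    // Sketch only: ensemble/port and znode paths are taken from the log above.
    ZooKeeper zk = new ZooKeeper("localhost:50432", 30000,
        event -> System.out.println("ZK event: " + event.getType() + " " + event.getPath()));
    // Real code would wait for the SyncConnected event before issuing requests.
    // The active master publishes itself at <baseZNode>/master, here /2/master.
    byte[] masterData = zk.getData("/2/master", true, null); // leaves a watch, like ZKUtil above
    zk.close();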
2016-08-10 15:45:10,553 INFO [10.22.16.34:56262.activeMasterManager] master.ActiveMasterManager(179): Registered Active Master=10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,575 INFO [IPC Server handler 3 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|FINALIZED]]} size 0
2016-08-10 15:45:10,575 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-10 15:45:10,576 INFO [main] client.ConnectionUtils(106): regionserver//10.22.16.34:0 server-side HConnection retries=350
2016-08-10 15:45:10,576 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-10 15:45:10,577 DEBUG [10.22.16.34:56262.activeMasterManager] util.FSUtils(901): Created cluster ID file at hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/hbase.id with ID: a1b8b1e0-d198-4ce1-a718-142ba2b6af6f
2016-08-10 15:45:10,577 INFO [main] ipc.RpcServer$Listener(635): regionserver//10.22.16.34:0: started 3 reader(s) listening on port=56266
2016-08-10 15:45:10,579 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:10,579 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:10,581 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:10,582 INFO [10.22.16.34:56262.activeMasterManager] master.MasterFileSystem(528): BOOTSTRAP: creating hbase:meta region
2016-08-10 15:45:10,582 INFO [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(6162): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'table', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57 Table name == hbase:meta
2016-08-10 15:45:10,583 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:56266 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:10,585 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:562660x0, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:10,586 DEBUG [main] zookeeper.ZKUtil(365): regionserver:562660x0, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/master
2016-08-10 15:45:10,587 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): regionserver:56266-0x15676a151160007 connected
2016-08-10 15:45:10,587 DEBUG [main] zookeeper.ZKUtil(367): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-10 15:45:10,588 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-10 15:45:10,588 INFO [RpcServer.listener,port=56266] ipc.RpcServer$Listener(769): RpcServer.listener,port=56266: starting
2016-08-10 15:45:10,588 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-10 15:45:10,588 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-10 15:45:10,589 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-10 15:45:10,589 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-10 15:45:10,589 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-10 15:45:10,589 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-10 15:45:10,590 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-10 15:45:10,590 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-10 15:45:10,590 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
2016-08-10 15:45:10,590 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-10 15:45:10,591 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-10 15:45:10,591 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-10 15:45:10,591 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-10 15:45:10,593 INFO [M:0;10.22.16.34:56262] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x41e05a8 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:10,593 INFO [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x688607f3 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:10,596 DEBUG [M:0;10.22.16.34:56262-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x41e05a80x0, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:10,596 INFO [IPC Server handler 4 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|RBW]]} size 0
2016-08-10 15:45:10,596 DEBUG [RS:0;10.22.16.34:56266-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x688607f30x0, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:10,597 INFO [M:0;10.22.16.34:56262] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2016-08-10 15:45:10,597 DEBUG [M:0;10.22.16.34:56262] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster
2016-08-10 15:45:10,597 INFO [RS:0;10.22.16.34:56266] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2016-08-10 15:45:10,597 DEBUG [RS:0;10.22.16.34:56266] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster
2016-08-10 15:45:10,597 DEBUG [M:0;10.22.16.34:56262] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f9b10d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:45:10,597 DEBUG [M:0;10.22.16.34:56262] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:10,597 DEBUG [M:0;10.22.16.34:56262] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:10,597 DEBUG [RS:0;10.22.16.34:56266] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66e03308, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:45:10,597 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740
2016-08-10 15:45:10,598 DEBUG [M:0;10.22.16.34:56262-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x41e05a8-0x15676a151160008 connected
2016-08-10 15:45:10,598 DEBUG [RS:0;10.22.16.34:56266-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x688607f3-0x15676a151160009 connected
2016-08-10 15:45:10,598 DEBUG [RS:0;10.22.16.34:56266] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:10,598 DEBUG [RS:0;10.22.16.34:56266] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:10,603 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:10,604 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:10,606 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info
2016-08-10 15:45:10,609 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:10,609 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:10,610 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table
2016-08-10 15:45:10,611 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740
2016-08-10 15:45:10,614 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead
2016-08-10 15:45:10,618 DEBUG [10.22.16.34:56262.activeMasterManager] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:45:10,619 INFO [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=2
2016-08-10 15:45:10,619 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-10 15:45:10,619 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-10 15:45:10,619 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-10 15:45:10,619 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-10 15:45:10,619 INFO [10.22.16.34:56262.activeMasterManager] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-10 15:45:10,630 INFO [IPC Server handler 5 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|RBW]]} size 0
2016-08-10 15:45:10,633 DEBUG [10.22.16.34:56262.activeMasterManager] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:10,639 INFO [10.22.16.34:56262.activeMasterManager] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-10 15:45:10,640 INFO [10.22.16.34:56262.activeMasterManager] coordination.ZKSplitLogManagerCoordination(599): Found 0 orphan tasks and 0 rescan nodes
2016-08-10 15:45:10,641 DEBUG [10.22.16.34:56262.activeMasterManager] util.FSTableDescriptors(222): Fetching table descriptors from the filesystem.
2016-08-10 15:45:10,646 INFO [10.22.16.34:56262.activeMasterManager] balancer.StochasticLoadBalancer(156): loading config
2016-08-10 15:45:10,647 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/balancer
2016-08-10 15:45:10,647 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/normalizer
2016-08-10 15:45:10,649 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/switch/split
2016-08-10 15:45:10,649 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/switch/merge
2016-08-10 15:45:10,651 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running
2016-08-10 15:45:10,651 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running
2016-08-10 15:45:10,651 INFO [10.22.16.34:56262.activeMasterManager] master.HMaster(620): Server active/primary master=10.22.16.34,56262,1470869110526, sessionid=0x15676a151160006, setting cluster-up flag (Was=false)
2016-08-10 15:45:10,651 INFO [10.22.16.34:56262.activeMasterManager] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager was loaded successfully.
2016-08-10 15:45:10,652 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(813): ClusterId : a1b8b1e0-d198-4ce1-a718-142ba2b6af6f
2016-08-10 15:45:10,652 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(813): ClusterId : a1b8b1e0-d198-4ce1-a718-142ba2b6af6f
2016-08-10 15:45:10,652 INFO [M:0;10.22.16.34:56262] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
2016-08-10 15:45:10,652 INFO [RS:0;10.22.16.34:56266] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
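[At this point the second cluster has a cluster ID file on HDFS and an active master, and both server threads report the same ClusterId. A sketch of reading those back through the client API of this era (Admin.getClusterStatus existed in the 1.x/2.0-SNAPSHOT line); 'admin' is assumed to be an Admin for this cluster:]

    import org.apache.hadoop.hbase.ClusterStatus;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch only: 'admin' is assumed to come from a Connection to this mini-cluster.
    ClusterStatus status = admin.getClusterStatus();
    // Should echo "ClusterId : a1b8b1e0-..." and the Registered Active Master above.
    System.out.println("clusterId=" + status.getClusterId());
    System.out.println("active master=" + status.getMaster());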
2016-08-10 15:45:10,652 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing
2016-08-10 15:45:10,653 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing
2016-08-10 15:45:10,654 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc already exists
2016-08-10 15:45:10,655 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc/acquired already exists
2016-08-10 15:45:10,656 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc/reached already exists
2016-08-10 15:45:10,656 INFO [10.22.16.34:56262.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/online-snapshot/acquired /2/online-snapshot/reached /2/online-snapshot/abort
2016-08-10 15:45:10,657 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized
2016-08-10 15:45:10,657 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized
2016-08-10 15:45:10,657 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing
2016-08-10 15:45:10,657 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing
2016-08-10 15:45:10,657 DEBUG [M:0;10.22.16.34:56262] zookeeper.RecoverableZooKeeper(594): Node /2/online-snapshot/acquired already exists
2016-08-10 15:45:10,658 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/online-snapshot/acquired already exists
2016-08-10 15:45:10,658 DEBUG [10.22.16.34:56262.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,658 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized
2016-08-10 15:45:10,658 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing
2016-08-10 15:45:10,658 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized
2016-08-10 15:45:10,659 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing
2016-08-10 15:45:10,659 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc/acquired already exists
2016-08-10 15:45:10,660 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc already exists
2016-08-10 15:45:10,660 INFO [10.22.16.34:56262.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/rolllog-proc/acquired /2/rolllog-proc/reached /2/rolllog-proc/abort
2016-08-10 15:45:10,661 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/acquired already exists
2016-08-10 15:45:10,662 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/reached already exists
2016-08-10 15:45:10,662 DEBUG [10.22.16.34:56262.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,662 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized
2016-08-10 15:45:10,662 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/acquired already exists
2016-08-10 15:45:10,663 DEBUG [RS:0;10.22.16.34:56266] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/abort already exists
2016-08-10 15:45:10,663 INFO [M:0;10.22.16.34:56262] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2016-08-10 15:45:10,663 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized
2016-08-10 15:45:10,663 INFO [M:0;10.22.16.34:56262] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-08-10 15:45:10,663 INFO [RS:0;10.22.16.34:56266] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2016-08-10 15:45:10,663 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec
2016-08-10 15:45:10,663 INFO [10.22.16.34:56262.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/flush-table-proc/acquired /2/flush-table-proc/reached /2/flush-table-proc/abort
2016-08-10 15:45:10,664 INFO [RS:0;10.22.16.34:56266] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-08-10 15:45:10,664 DEBUG [M:0;10.22.16.34:56262] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@195bfa4a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.16.34/10.22.16.34:0
2016-08-10 15:45:10,664 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec
2016-08-10 15:45:10,664 DEBUG [M:0;10.22.16.34:56262] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:10,664 DEBUG [M:0;10.22.16.34:56262] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:10,664 DEBUG [RS:0;10.22.16.34:56266] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ec61406, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.16.34/10.22.16.34:0
2016-08-10 15:45:10,664 DEBUG [M:0;10.22.16.34:56262] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:M:0;10.22.16.34:56262
2016-08-10 15:45:10,664 DEBUG [RS:0;10.22.16.34:56266] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:45:10,664 DEBUG [RS:0;10.22.16.34:56266] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:45:10,665 DEBUG [RS:0;10.22.16.34:56266] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;10.22.16.34:56266
2016-08-10 15:45:10,665 DEBUG [10.22.16.34:56262.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,665 INFO [10.22.16.34:56262.activeMasterManager] master.MasterCoprocessorHost(91): System coprocessor loading is enabled
2016-08-10 15:45:10,665 INFO [10.22.16.34:56262.activeMasterManager] coprocessor.CoprocessorHost(161): System coprocessor org.apache.hadoop.hbase.backup.master.BackupController was loaded successfully with priority (536870911).
2016-08-10 15:45:10,665 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-10 15:45:10,665 DEBUG [10.22.16.34:56262.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-10.22.16.34:56262, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:10,665 DEBUG [10.22.16.34:56262.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-10.22.16.34:56262, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:10,666 DEBUG [10.22.16.34:56262.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-10.22.16.34:56262, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:10,666 DEBUG [M:0;10.22.16.34:56262] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,666 DEBUG [10.22.16.34:56262.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-10.22.16.34:56262, corePoolSize=5, maxPoolSize=5
2016-08-10 15:45:10,666 INFO [M:0;10.22.16.34:56262] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2016-08-10 15:45:10,666 INFO [M:0;10.22.16.34:56262] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2016-08-10 15:45:10,666 DEBUG [10.22.16.34:56262.activeMasterManager] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-10.22.16.34:56262, corePoolSize=10, maxPoolSize=10
2016-08-10 15:45:10,666 DEBUG [RS:0;10.22.16.34:56266] zookeeper.ZKUtil(365): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56266,1470869110579
2016-08-10 15:45:10,666 DEBUG [10.22.16.34:56262.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-10.22.16.34:56262, corePoolSize=1, maxPoolSize=1
2016-08-10 15:45:10,666 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56262,1470869110526 with port=56262, startcode=1470869110526
2016-08-10 15:45:10,666 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56266,1470869110579
2016-08-10 15:45:10,666 INFO [RS:0;10.22.16.34:56266] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2016-08-10 15:45:10,666 INFO [10.22.16.34:56262.activeMasterManager] procedure2.ProcedureExecutor(487): Starting procedure executor threads=9
2016-08-10 15:45:10,666 DEBUG [M:0;10.22.16.34:56262] regionserver.HRegionServer(2358): Master is not running yet
2016-08-10 15:45:10,667 WARN [M:0;10.22.16.34:56262] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying.
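[The memstore and compaction-throttle figures logged above are all config-derived: 995.6 M is roughly 0.4 of the 2.4 G heap, the low mark is 0.95 of that, and the throughput band is 10 to 20 MB/sec. A sketch of the configuration keys that, to my understanding of this era of HBase, produce those numbers (values here simply restate the log's defaults):]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch only: these keys restate the defaults reported in the log above.
    Configuration conf = HBaseConfiguration.create();
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);              // ~995.6 M of a 2.4 G heap
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f); // low mark ~945.8 M
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 20L * 1024 * 1024); // 20 MB/sec
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 10L * 1024 * 1024);  // 10 MB/sec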
2016-08-10 15:45:10,667 INFO [10.22.16.34:56262.activeMasterManager] wal.WALProcedureStore(296): Starting WAL Procedure Store lease recovery
2016-08-10 15:45:10,666 INFO [RS:0;10.22.16.34:56266] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2016-08-10 15:45:10,667 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56262,1470869110526 with port=56266, startcode=1470869110579
2016-08-10 15:45:10,668 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,668 WARN [10.22.16.34:56262.activeMasterManager] wal.WALProcedureStore(941): Log directory not found: File hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/MasterProcWALs does not exist.
2016-08-10 15:45:10,669 DEBUG [RpcServer.listener,port=56262] ipc.RpcServer$Listener(880): RpcServer.listener,port=56262: connection from 10.22.16.34:56272; # active connections: 1
2016-08-10 15:45:10,668 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service RegionServerStatusService, sasl=false
2016-08-10 15:45:10,669 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /2/rs/10.22.16.34,56266,1470869110579
2016-08-10 15:45:10,669 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56262] ipc.RpcServer$Connection(1710): Auth successful for tyu.hfs.1 (auth:SIMPLE)
2016-08-10 15:45:10,670 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /2/rs/10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,670 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56262] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56272 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:45:10,670 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56262] ipc.CallRunner(112): B.defaultRpcServer.handler=0,queue=0,port=56262: callId: 0 service: RegionServerStatusService methodName: RegionServerStartup size: 45 connection: 10.22.16.34:56272
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
	at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2295)
	at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:264)
	at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:45:10,671 DEBUG [RS:0;10.22.16.34:56266] regionserver.HRegionServer(2358): Master is not running yet
2016-08-10 15:45:10,671 WARN [RS:0;10.22.16.34:56266] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying.
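[The ServerNotRunningYetException above is expected during startup: the region server's reportForDuty races the master's initialization, so the server sleeps and retries. A test that needs the cluster ready would block on initialization instead of catching this; a sketch, assuming the same HBaseTestingUtility 'util' and its Waiter-style waitFor helper:]

    // Sketch only: wait until reportForDuty can succeed rather than observing
    // ServerNotRunningYetException; 'util' is assumed to be the test's HBaseTestingUtility.
    util.waitFor(30000, () -> util.getMiniHBaseCluster().getMaster().isInitialized());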
2016-08-10 15:45:10,671 DEBUG [10.22.16.34:56262.activeMasterManager] wal.WALProcedureStore(833): Roll new state log: 1
2016-08-10 15:45:10,672 INFO [10.22.16.34:56262.activeMasterManager] wal.WALProcedureStore(319): Lease acquired for flushLogId: 1
2016-08-10 15:45:10,672 DEBUG [10.22.16.34:56262.activeMasterManager] wal.WALProcedureStore(336): No state logs to replay.
2016-08-10 15:45:10,672 DEBUG [10.22.16.34:56262.activeMasterManager] procedure2.ProcedureExecutor$1(298): load procedures maxProcId=0
2016-08-10 15:45:10,672 DEBUG [10.22.16.34:56262.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-10 15:45:10,672 DEBUG [10.22.16.34:56262.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2016-08-10 15:45:10,672 INFO [10.22.16.34:56262.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:45:10,674 DEBUG [10.22.16.34:56262.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner0x0, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:45:10,676 DEBUG [10.22.16.34:56262.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(674): replicationLogCleaner-0x15676a15116000a connected
2016-08-10 15:45:10,676 DEBUG [10.22.16.34:56262.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2016-08-10 15:45:10,676 DEBUG [10.22.16.34:56262.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2016-08-10 15:45:10,676 DEBUG [10.22.16.34:56262.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2016-08-10 15:45:10,677 DEBUG [10.22.16.34:56262.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2016-08-10 15:45:10,677 INFO [10.22.16.34:56262.activeMasterManager] master.ServerManager(1008): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
2016-08-10 15:45:10,677 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56262,1470869110526 with port=56262, startcode=1470869110526
2016-08-10 15:45:10,677 INFO [M:0;10.22.16.34:56262] master.ServerManager(456): Registering server=10.22.16.34,56262,1470869110526
2016-08-10 15:45:10,677 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57
2016-08-10 15:45:10,677 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:56251
2016-08-10 15:45:10,677 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1
2016-08-10 15:45:10,677 WARN [M:0;10.22.16.34:56262] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
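[The "Waiting for region servers count to settle" line is driven by four master settings whose values the log states directly: minimum 1, maximum 1, timeout 4500 ms, interval 1500 ms. A sketch of setting those knobs explicitly (the keys are the standard hbase.master.wait.on.regionservers.* family; values restate the log):]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch only: the knobs behind "expecting minimum of 1, maximum of 1,
    // timeout of 4500 ms, interval of 1500 ms" in the log above.
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 1);
    conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);
    conf.setInt("hbase.master.wait.on.regionservers.interval", 1500);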
2016-08-10 15:45:10,677 INFO [M:0;10.22.16.34:56262] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:10,677 DEBUG [M:0;10.22.16.34:56262] regionserver.HRegionServer(1654): logdir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,681 DEBUG [M:0;10.22.16.34:56262] regionserver.Replication(151): ReplicationStatisticsThread 300 2016-08-10 15:45:10,681 INFO [M:0;10.22.16.34:56262] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider 2016-08-10 15:45:10,681 INFO [M:0;10.22.16.34:56262] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy 2016-08-10 15:45:10,681 INFO [M:0;10.22.16.34:56262] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds 2016-08-10 15:45:10,682 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.16.34:56262, corePoolSize=3, maxPoolSize=3 2016-08-10 15:45:10,682 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.16.34:56262, corePoolSize=1, maxPoolSize=1 2016-08-10 15:45:10,682 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.16.34:56262, corePoolSize=3, maxPoolSize=3 2016-08-10 15:45:10,683 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.16.34:56262, corePoolSize=1, maxPoolSize=1 2016-08-10 15:45:10,683 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.16.34:56262, corePoolSize=2, maxPoolSize=2 2016-08-10 15:45:10,683 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56262, corePoolSize=10, maxPoolSize=10 2016-08-10 15:45:10,683 DEBUG [M:0;10.22.16.34:56262] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.16.34:56262, corePoolSize=3, maxPoolSize=3 2016-08-10 15:45:10,684 DEBUG [M:0;10.22.16.34:56262] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56266,1470869110579 2016-08-10 15:45:10,685 DEBUG [M:0;10.22.16.34:56262] zookeeper.ZKUtil(365): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,685 INFO [M:0;10.22.16.34:56262] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.16.34,56262,1470869110526] other RSs: [10.22.16.34,56266,1470869110579, 10.22.16.34,56262,1470869110526] 2016-08-10 15:45:10,714 INFO [M:0;10.22.16.34:56262] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore. 
2016-08-10 15:45:10,714 INFO [SplitLogWorker-10.22.16.34:56262] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.16.34,56262,1470869110526 starting 2016-08-10 15:45:10,715 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1412): Serving as 10.22.16.34,56262,1470869110526, RpcServer on 10.22.16.34/10.22.16.34:56262, sessionid=0x15676a151160006 2016-08-10 15:45:10,715 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting 2016-08-10 15:45:10,715 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56262,1470869110526' 2016-08-10 15:45:10,715 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/rolllog-proc/abort' 2016-08-10 15:45:10,715 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/rolllog-proc/acquired' 2016-08-10 15:45:10,716 INFO [M:0;10.22.16.34:56262] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager. 2016-08-10 15:45:10,716 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started 2016-08-10 15:45:10,716 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting 2016-08-10 15:45:10,716 DEBUG [M:0;10.22.16.34:56262] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,716 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56262,1470869110526' 2016-08-10 15:45:10,716 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/online-snapshot/abort' 2016-08-10 15:45:10,717 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/online-snapshot/acquired' 2016-08-10 15:45:10,717 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started 2016-08-10 15:45:10,717 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting 2016-08-10 15:45:10,717 DEBUG [M:0;10.22.16.34:56262] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,717 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56262,1470869110526' 2016-08-10 15:45:10,717 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/flush-table-proc/abort' 2016-08-10 15:45:10,718 DEBUG [M:0;10.22.16.34:56262] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/flush-table-proc/acquired' 2016-08-10 15:45:10,718 DEBUG [M:0;10.22.16.34:56262] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started 2016-08-10 15:45:10,718 INFO [M:0;10.22.16.34:56262] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-08-10 15:45:10,727 INFO [10.22.16.34:56262.activeMasterManager] master.ServerManager(1025): Finished waiting for region servers count to settle; checked in 1, slept for 50 ms, expecting minimum of 1, maximum of 1, master is running 2016-08-10 15:45:10,728 INFO [10.22.16.34:56262.activeMasterManager] master.ServerManager(456): Registering 
server=10.22.16.34,56266,1470869110579 2016-08-10 15:45:10,728 INFO [10.22.16.34:56262.activeMasterManager] master.HMaster(710): Registered server found up in zk but who has not yet reported in: 10.22.16.34,56266,1470869110579 2016-08-10 15:45:10,731 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(624): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Unable to get data of znode /2/meta-region-server because node does not exist (not an error) 2016-08-10 15:45:10,733 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(624): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Unable to get data of znode /2/meta-region-server because node does not exist (not an error) 2016-08-10 15:45:10,733 INFO [10.22.16.34:56262.activeMasterManager] master.HMaster(938): Re-assigning hbase:meta with replicaId, 0 it was on null 2016-08-10 15:45:10,733 DEBUG [10.22.16.34:56262.activeMasterManager] master.AssignmentManager(1291): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=10.22.16.34,56262,1470869110526; 2 (online=2) available servers, forceNewPlan=false 2016-08-10 15:45:10,733 INFO [10.22.16.34:56262.activeMasterManager] master.AssignmentManager(1080): Assigning hbase:meta,,1.1588230740 to 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,733 INFO [10.22.16.34:56262.activeMasterManager] master.RegionStates(1106): Transition {1588230740 state=OFFLINE, ts=1470869110733, server=null} to {1588230740 state=PENDING_OPEN, ts=1470869110733, server=10.22.16.34,56262,1470869110526} 2016-08-10 15:45:10,733 INFO [10.22.16.34:56262.activeMasterManager] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,734 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.MetaTableLocator(451): META region location doesn't exist, create it 2016-08-10 15:45:10,736 DEBUG [10.22.16.34:56262.activeMasterManager] master.ServerManager(934): New admin connection to 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,736 INFO [10.22.16.34:56262.activeMasterManager] regionserver.RSRpcServices(1666): Open hbase:meta,,1.1588230740 2016-08-10 15:45:10,737 DEBUG [10.22.16.34:56262.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869110737,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"} 2016-08-10 15:45:10,737 INFO [RS_OPEN_META-10.22.16.34:56262-0] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider 2016-08-10 15:45:10,737 INFO [RS_OPEN_META-10.22.16.34:56262-0] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy 2016-08-10 15:45:10,742 INFO [RS_OPEN_META-10.22.16.34:56262-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0, suffix=, logDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta, archiveDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs 2016-08-10 15:45:10,745 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] wal.FSHLog(665): syncing writer 
hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:10,749 INFO [RS_OPEN_META-10.22.16.34:56262-0] wal.FSHLog(1434): Slow sync cost: 4 ms, current pipeline: [] 2016-08-10 15:45:10,749 INFO [RS_OPEN_META-10.22.16.34:56262-0] wal.FSHLog(889): New WAL /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:10,750 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] regionserver.HRegion(6339): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2016-08-10 15:45:10,750 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] coprocessor.CoprocessorHost(181): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2016-08-10 15:45:10,750 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] regionserver.HRegion(7445): Registered coprocessor service: region=hbase:meta,,1 service=hbase.pb.MultiRowMutationService 2016-08-10 15:45:10,751 INFO [RS_OPEN_META-10.22.16.34:56262-0] regionserver.RegionCoprocessorHost(376): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2016-08-10 15:45:10,751 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table meta 1588230740 2016-08-10 15:45:10,751 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740 2016-08-10 15:45:10,754 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:10,755 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:10,756 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info 2016-08-10 15:45:10,757 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:10,758 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:10,759 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table 2016-08-10 15:45:10,763 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740 2016-08-10 15:45:10,765 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead 2016-08-10 15:45:10,771 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/recovered.edits/3.seqid to file, newSeqId=3, maxSeqId=2 2016-08-10 15:45:10,772 INFO [RS_OPEN_META-10.22.16.34:56262-0] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=3 2016-08-10 15:45:10,772 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:10,773 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:meta,,1.1588230740 2016-08-10 15:45:10,773 DEBUG [PostOpenDeployTasks:1588230740] master.AssignmentManager(2884): Got transition OPENED for {1588230740 state=PENDING_OPEN, ts=1470869110733, server=10.22.16.34,56262,1470869110526} from 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,774 INFO [PostOpenDeployTasks:1588230740] master.RegionStates(1106): Transition {1588230740 state=PENDING_OPEN, ts=1470869110733, server=10.22.16.34,56262,1470869110526} to {1588230740 state=OPEN, ts=1470869110774, server=10.22.16.34,56262,1470869110526} 2016-08-10 15:45:10,774 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,778 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/meta-region-server 2016-08-10 15:45:10,778 DEBUG [PostOpenDeployTasks:1588230740] master.RegionStates(452): Onlined 1588230740 on 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,779 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:meta,,1.1588230740 2016-08-10 15:45:10,779 DEBUG [RS_OPEN_META-10.22.16.34:56262-0] handler.OpenRegionHandler(126): Opened hbase:meta,,1.1588230740 on 10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,813 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:562260000000000 2016-08-10 15:45:10,813 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 906msec: CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=FINISHED 2016-08-10 15:45:10,942 DEBUG [sync.2] 
wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:10,944 INFO [10.22.16.34:56262.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META 2016-08-10 15:45:10,944 DEBUG [10.22.16.34:56262.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869110944,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"} 2016-08-10 15:45:10,945 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:10,947 INFO [10.22.16.34:56262.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META 2016-08-10 15:45:10,949 DEBUG [10.22.16.34:56262.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(server crash processing) 2016-08-10 15:45:10,949 INFO [10.22.16.34:56262.activeMasterManager] master.ServerManager(683): AssignmentManager hasn't finished failover cleanup; waiting 2016-08-10 15:45:10,951 INFO [10.22.16.34:56262.activeMasterManager] master.HMaster(965): hbase:meta with replicaId 0 assigned=1, location=10.22.16.34,56262,1470869110526 2016-08-10 15:45:10,957 INFO [10.22.16.34:56262.activeMasterManager] master.AssignmentManager(555): Clean cluster startup. Don't reassign user regions 2016-08-10 15:45:10,961 INFO [10.22.16.34:56262.activeMasterManager] master.AssignmentManager(425): Joined the cluster in 9ms, failover=false 2016-08-10 15:45:10,963 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info 2016-08-10 15:45:10,963 DEBUG [10.22.16.34:56262.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table 2016-08-10 15:45:10,964 INFO [10.22.16.34:56262.activeMasterManager] master.TableNamespaceManager(93): Namespace table not found. Creating... 2016-08-10 15:45:11,079 DEBUG [10.22.16.34:56262.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
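At this point hbase:meta is assigned (assigned=1 above) and its location has been published under the /2/meta-region-server znode. For orientation, a minimal sketch of how code outside the master can block until that location appears; it assumes the MetaTableLocator and ZooKeeperWatcher APIs of this 2.0.0-SNAPSHOT build behave as their names suggest, and the class name MetaProbe is invented for the example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
    import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

    public class MetaProbe {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The mini cluster in this run serves ZooKeeper on localhost:50432.
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("hbase.zookeeper.property.clientPort", "50432");
        ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "meta-probe", null);
        try {
          // Blocks until the meta-region-server znode is populated, up to 60s.
          ServerName meta = new MetaTableLocator().waitMetaRegionLocation(zkw, 60000);
          System.out.println("hbase:meta is on " + meta);
        } finally {
          zkw.close();
        }
      }
    }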
2016-08-10 15:45:11,082 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/hbase:namespace/write-master:562620000000000 2016-08-10 15:45:11,205 INFO [IPC Server handler 0 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|FINALIZED]]} size 0 2016-08-10 15:45:11,208 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2016-08-10 15:45:11,209 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6162): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/.tmp Table name == hbase:namespace 2016-08-10 15:45:11,220 INFO [IPC Server handler 9 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|FINALIZED]]} size 0 2016-08-10 15:45:11,221 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(736): Instantiated hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:45:11,221 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1419): Closing hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f.: disabling compactions & flushes 2016-08-10 15:45:11,221 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:45:11,222 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1552): Closed hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 
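The HTD dump above spells out the schema used for hbase:namespace: a single 'info' family with VERSIONS => '10', BLOCKSIZE => '8192', IN_MEMORY => 'true'. A sketch of building an equivalent descriptor through the HTableDescriptor/HColumnDescriptor API this build still ships; the table name demo_ns_like is invented for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class NamespaceLikeTable {
      static void create(Admin admin) throws IOException {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("demo_ns_like"));
        HColumnDescriptor info = new HColumnDescriptor("info");
        info.setMaxVersions(10); // VERSIONS => '10'
        info.setBlocksize(8192); // BLOCKSIZE => '8192'
        info.setInMemory(true);  // IN_MEMORY => 'true'
        htd.addFamily(info);
        admin.createTable(htd);
      }
    }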
2016-08-10 15:45:11,332 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":41}]},"row":"hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f."} 2016-08-10 15:45:11,335 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:11,337 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1 2016-08-10 15:45:11,446 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,447 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table hbase:namespace state org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-10 15:45:11,448 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {f9abaaef3dbd3930695d90325cf0be0f state=OFFLINE, ts=1470869111446, server=null} to {f9abaaef3dbd3930695d90325cf0be0f state=PENDING_OPEN, ts=1470869111448, server=10.22.16.34,56262,1470869110526} 2016-08-10 15:45:11,448 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. with state=PENDING_OPEN, sn=10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,450 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:11,452 INFO [ProcedureExecutor-0] regionserver.RSRpcServices(1666): Open hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 
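The TableNotFoundException above looks alarming but appears transient in this run: it fires while CreateTableProcedure is still mid-flight, before the table-state row is readable by TableStateManager, and the same procedure later completes with state=FINISHED. Client code should not probe a table this way; a hedged sketch of the usual guard through the Admin API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TableReadyCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("hbase:namespace");
          // Checking existence and enabled-ness together avoids racing a
          // table that is still mid-creation, like the one in the log above.
          if (admin.tableExists(tn) && admin.isTableEnabled(tn)) {
            System.out.println(tn + " is ready");
          }
        }
      }
    }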
2016-08-10 15:45:11,457 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,457 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869111457,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:namespace"} 2016-08-10 15:45:11,458 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:11,459 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table hbase:namespace state to ENABLED in META 2016-08-10 15:45:11,461 INFO [RS_OPEN_REGION-10.22.16.34:56262-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56262%2C1470869110526.regiongroup-0, suffix=, logDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526, archiveDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs 2016-08-10 15:45:11,464 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] wal.FSHLog(665): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:45:11,468 INFO [RS_OPEN_REGION-10.22.16.34:56262-0] wal.FSHLog(1434): Slow sync cost: 4 ms, current pipeline: [] 2016-08-10 15:45:11,468 INFO [RS_OPEN_REGION-10.22.16.34:56262-0] wal.FSHLog(889): New WAL /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:45:11,469 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] regionserver.HRegion(6339): Opening region: {ENCODED => f9abaaef3dbd3930695d90325cf0be0f, NAME => 'hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f.', STARTKEY => '', ENDKEY => ''} 2016-08-10 15:45:11,469 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table namespace f9abaaef3dbd3930695d90325cf0be0f 2016-08-10 15:45:11,470 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] regionserver.HRegion(736): Instantiated hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 
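The WAL file names here carry a regiongroup-N infix, and the earlier WALFactory records named RegionGroupingProvider with BoundedGroupingStrategy, so this test runs the bounded multi-WAL provider (regiongroup-0 and regiongroup-1 both appear later, consistent with a bound of two groups). To the best of my knowledge these are the configuration keys that select it; treat the exact key names as an assumption to verify against the build:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    final class MultiWalConfig {
      static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "multiwal");               // RegionGroupingProvider
        conf.set("hbase.wal.regiongrouping.strategy", "bounded"); // BoundedGroupingStrategy
        conf.setInt("hbase.wal.regiongrouping.numgroups", 2);     // regiongroup-0 / regiongroup-1
        return conf;
      }
    }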
2016-08-10 15:45:11,474 INFO [StoreOpener-f9abaaef3dbd3930695d90325cf0be0f-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:11,474 INFO [StoreOpener-f9abaaef3dbd3930695d90325cf0be0f-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:11,475 DEBUG [StoreOpener-f9abaaef3dbd3930695d90325cf0be0f-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/info 2016-08-10 15:45:11,477 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f 2016-08-10 15:45:11,483 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-10 15:45:11,483 INFO [RS_OPEN_REGION-10.22.16.34:56262-0] regionserver.HRegion(871): Onlined f9abaaef3dbd3930695d90325cf0be0f; next sequenceid=2 2016-08-10 15:45:11,484 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:45:11,485 INFO [PostOpenDeployTasks:f9abaaef3dbd3930695d90325cf0be0f] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:45:11,485 DEBUG [PostOpenDeployTasks:f9abaaef3dbd3930695d90325cf0be0f] master.AssignmentManager(2884): Got transition OPENED for {f9abaaef3dbd3930695d90325cf0be0f state=PENDING_OPEN, ts=1470869111448, server=10.22.16.34,56262,1470869110526} from 10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,485 INFO [PostOpenDeployTasks:f9abaaef3dbd3930695d90325cf0be0f] master.RegionStates(1106): Transition {f9abaaef3dbd3930695d90325cf0be0f state=PENDING_OPEN, ts=1470869111448, server=10.22.16.34,56262,1470869110526} to {f9abaaef3dbd3930695d90325cf0be0f state=OPEN, ts=1470869111485, server=10.22.16.34,56262,1470869110526} 2016-08-10 15:45:11,485 INFO [PostOpenDeployTasks:f9abaaef3dbd3930695d90325cf0be0f] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 
with state=OPEN, openSeqNum=2, server=10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,486 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:11,487 DEBUG [PostOpenDeployTasks:f9abaaef3dbd3930695d90325cf0be0f] master.RegionStates(452): Onlined f9abaaef3dbd3930695d90325cf0be0f on 10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,487 DEBUG [PostOpenDeployTasks:f9abaaef3dbd3930695d90325cf0be0f] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:45:11,488 DEBUG [RS_OPEN_REGION-10.22.16.34:56262-0] handler.OpenRegionHandler(126): Opened hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. on 10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,494 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/namespace 2016-08-10 15:45:11,497 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/namespace 2016-08-10 15:45:11,563 DEBUG [10.22.16.34:56262.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 2016-08-10 15:45:11,675 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(2339): reportForDuty to master=10.22.16.34,56262,1470869110526 with port=56266, startcode=1470869110579 2016-08-10 15:45:11,676 INFO [B.defaultRpcServer.handler=1,queue=0,port=56262] master.ServerManager(456): Registering server=10.22.16.34,56266,1470869110579 2016-08-10 15:45:11,677 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57 2016-08-10 15:45:11,677 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:56251 2016-08-10 15:45:11,677 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1 2016-08-10 15:45:11,677 WARN [RS:0;10.22.16.34:56266] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
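"Set watcher on znode that does not yet exist, /2/namespace" is the master arming an existence watch so it is notified the moment the namespace znode appears; the NodeCreated event follows two records later. A sketch of the same pattern through ZKUtil, reusing a ZooKeeperWatcher like the MetaProbe one sketched earlier (method name invented):

    import org.apache.hadoop.hbase.zookeeper.ZKUtil;
    import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
    import org.apache.zookeeper.KeeperException;

    final class NamespaceZNodeWatch {
      // Arms a watch whether or not the znode exists yet; returns current existence.
      static boolean arm(ZooKeeperWatcher zkw) throws KeeperException {
        return ZKUtil.watchAndCheckExists(zkw, "/2/namespace");
      }
    }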
2016-08-10 15:45:11,678 INFO [RS:0;10.22.16.34:56266] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:11,678 DEBUG [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1654): logdir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579 2016-08-10 15:45:11,687 DEBUG [RS:0;10.22.16.34:56266] regionserver.Replication(151): ReplicationStatisticsThread 300 2016-08-10 15:45:11,687 INFO [RS:0;10.22.16.34:56266] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider 2016-08-10 15:45:11,687 INFO [RS:0;10.22.16.34:56266] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy 2016-08-10 15:45:11,687 INFO [RS:0;10.22.16.34:56266] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds 2016-08-10 15:45:11,688 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.16.34:56266, corePoolSize=3, maxPoolSize=3 2016-08-10 15:45:11,688 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.16.34:56266, corePoolSize=1, maxPoolSize=1 2016-08-10 15:45:11,689 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.16.34:56266, corePoolSize=3, maxPoolSize=3 2016-08-10 15:45:11,689 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.16.34:56266, corePoolSize=1, maxPoolSize=1 2016-08-10 15:45:11,689 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.16.34:56266, corePoolSize=2, maxPoolSize=2 2016-08-10 15:45:11,689 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56266, corePoolSize=10, maxPoolSize=10 2016-08-10 15:45:11,689 DEBUG [RS:0;10.22.16.34:56266] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.16.34:56266, corePoolSize=3, maxPoolSize=3 2016-08-10 15:45:11,691 DEBUG [RS:0;10.22.16.34:56266] zookeeper.ZKUtil(365): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56266,1470869110579 2016-08-10 15:45:11,691 DEBUG [RS:0;10.22.16.34:56266] zookeeper.ZKUtil(365): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.16.34,56262,1470869110526 2016-08-10 15:45:11,692 INFO [RS:0;10.22.16.34:56266] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.16.34,56266,1470869110579, 10.22.16.34,56262,1470869110526] other RSs: [10.22.16.34,56266,1470869110579, 10.22.16.34,56262,1470869110526] 2016-08-10 15:45:11,722 INFO [RS:0;10.22.16.34:56266] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore. 
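The repeated CacheConfig line is worth decoding once: maxSize=1043962304 is the LruBlockCache ceiling (roughly 1 GB here), sized as a fraction of the regionserver heap, and minFactor/multiFactor/singleFactor are the eviction watermark and generation splits. A sketch of the two knobs most often tuned, with key names as I recall them (verify against this build):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    final class BlockCacheTuning {
      static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        // maxSize in the log is roughly RS heap * this fraction.
        conf.setFloat("hfile.block.cache.size", 0.4f);
        // The log shows cacheDataOnWrite=false; this flips it.
        conf.setBoolean("hbase.rs.cacheblocksonwrite", true);
        return conf;
      }
    }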
2016-08-10 15:45:11,722 INFO [SplitLogWorker-10.22.16.34:56266] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.16.34,56266,1470869110579 starting 2016-08-10 15:45:11,723 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1412): Serving as 10.22.16.34,56266,1470869110579, RpcServer on 10.22.16.34/10.22.16.34:56266, sessionid=0x15676a151160007 2016-08-10 15:45:11,723 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting 2016-08-10 15:45:11,723 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56266,1470869110579' 2016-08-10 15:45:11,723 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/rolllog-proc/abort' 2016-08-10 15:45:11,723 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/rolllog-proc/acquired' 2016-08-10 15:45:11,724 INFO [RS:0;10.22.16.34:56266] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager. 2016-08-10 15:45:11,724 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started 2016-08-10 15:45:11,724 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting 2016-08-10 15:45:11,724 DEBUG [RS:0;10.22.16.34:56266] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.16.34,56266,1470869110579 2016-08-10 15:45:11,724 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56266,1470869110579' 2016-08-10 15:45:11,724 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/online-snapshot/abort' 2016-08-10 15:45:11,724 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/online-snapshot/acquired' 2016-08-10 15:45:11,725 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started 2016-08-10 15:45:11,725 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting 2016-08-10 15:45:11,725 DEBUG [RS:0;10.22.16.34:56266] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.16.34,56266,1470869110579 2016-08-10 15:45:11,725 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.16.34,56266,1470869110579' 2016-08-10 15:45:11,725 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/flush-table-proc/abort' 2016-08-10 15:45:11,726 DEBUG [RS:0;10.22.16.34:56266] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/flush-table-proc/acquired' 2016-08-10 15:45:11,726 DEBUG [RS:0;10.22.16.34:56266] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started 2016-08-10 15:45:11,726 INFO [RS:0;10.22.16.34:56266] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-08-10 15:45:11,726 INFO [M:0;10.22.16.34:56262] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56262%2C1470869110526.regiongroup-1, suffix=, logDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526, 
archiveDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs 2016-08-10 15:45:11,729 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(665): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-1.1470869111726 2016-08-10 15:45:11,733 INFO [M:0;10.22.16.34:56262] wal.FSHLog(1434): Slow sync cost: 4 ms, current pipeline: [] 2016-08-10 15:45:11,733 INFO [M:0;10.22.16.34:56262] wal.FSHLog(889): New WAL /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-1.1470869111726 2016-08-10 15:45:11,778 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /2/table-lock/hbase:namespace/write-master:562620000000000 2016-08-10 15:45:11,779 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 706msec: CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=FINISHED 2016-08-10 15:45:11,999 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:45:12,107 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/namespace 2016-08-10 15:45:12,109 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-10 15:45:12,138 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2016-08-10 15:45:12,322 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 716msec: CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=FINISHED 2016-08-10 15:45:12,433 DEBUG [10.22.16.34:56262.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
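CreateNamespaceProcedure runs here for the two built-in namespaces (default and hbase), and the ZKNamespaceManager records show each one being mirrored into the /2/namespace znode. User namespaces are created the same way through the Admin API; demo_ns below is an invented name:

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    final class DemoNamespace {
      static void create(Admin admin) throws IOException {
        admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
        for (NamespaceDescriptor d : admin.listNamespaceDescriptors()) {
          System.out.println(d.getName()); // default, hbase, demo_ns
        }
      }
    }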
2016-08-10 15:45:12,651 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:45:12,737 INFO [RS:0;10.22.16.34:56266] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56266%2C1470869110579.regiongroup-0, suffix=, logDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579, archiveDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs 2016-08-10 15:45:12,740 DEBUG [RS:0;10.22.16.34:56266] wal.FSHLog(665): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579/10.22.16.34%2C56266%2C1470869110579.regiongroup-0.1470869112737 2016-08-10 15:45:12,746 INFO [RS:0;10.22.16.34:56266] wal.FSHLog(1434): Slow sync cost: 5 ms, current pipeline: [] 2016-08-10 15:45:12,746 INFO [RS:0;10.22.16.34:56266] wal.FSHLog(889): New WAL /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579/10.22.16.34%2C56266%2C1470869110579.regiongroup-0.1470869112737 2016-08-10 15:45:12,758 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/namespace 2016-08-10 15:45:12,760 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-10 15:45:12,761 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-10 15:45:12,973 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 539msec: CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=FINISHED 2016-08-10 15:45:12,985 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/namespace/default already exists 2016-08-10 15:45:12,986 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/namespace/default 2016-08-10 15:45:12,987 DEBUG [10.22.16.34:56262.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/namespace/hbase already exists 2016-08-10 15:45:12,988 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/namespace/hbase 2016-08-10 15:45:12,988 INFO [10.22.16.34:56262.activeMasterManager] master.HMaster(807): Master has completed initialization 2016-08-10 15:45:12,988 DEBUG [10.22.16.34:56262.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(master initialized) 2016-08-10 15:45:12,989 INFO [10.22.16.34:56262.activeMasterManager] quotas.MasterQuotaManager(72): Quota support disabled 2016-08-10 15:45:12,989 INFO [10.22.16.34:56262.activeMasterManager] zookeeper.ZooKeeperWatcher(225): not a secure deployment, proceeding 2016-08-10 15:45:13,004 INFO [10.22.16.34:56262.activeMasterManager] master.HMaster(1495): Client=null/null create 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 
'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-10 15:45:13,062 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x75d8d503 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:45:13,067 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x75d8d5030x0, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:45:13,069 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23959d6f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:45:13,069 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:45:13,069 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x75d8d503-0x15676a15116000b connected 2016-08-10 15:45:13,069 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:45:13,074 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:13,074 DEBUG [RpcServer.listener,port=56262] ipc.RpcServer$Listener(880): RpcServer.listener,port=56262: connection from 10.22.16.34:56283; # active connections: 2 2016-08-10 15:45:13,075 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56262] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:13,075 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56262] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56283 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(1089): Minicluster is up 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(1263): The hbase.fs.tmp.dir is set to /user/tyu/hbase-staging 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(2441): Starting mini mapreduce cluster... 
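"Minicluster is up" followed by "Starting mini mapreduce cluster..." is the HBaseTestingUtility handoff from the HBase/HDFS half of the fixture to the MapReduce half. A sketch of the driving test code, assuming the utility methods behave as named in this build; BackupTestHarness is an invented name:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class BackupTestHarness {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(1);           // ZK + HDFS + master + regionserver
        util.startMiniMapReduceCluster();   // the step logged above
        try {
          // test body: drive backup/restore MR jobs against the mini cluster
        } finally {
          util.shutdownMiniMapReduceCluster();
          util.shutdownMiniCluster();
        }
      }
    }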
2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(743): Setting test.cache.data to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/cache_data in system properties and HBase conf 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.tmp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_tmp in system properties and HBase conf 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.log.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/hadoop_logs in system properties and HBase conf 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.local.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/mapred_local in system properties and HBase conf 2016-08-10 15:45:13,082 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.temp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/mapred_temp in system properties and HBase conf 2016-08-10 15:45:13,083 INFO [main] hbase.HBaseTestingUtility(734): read short circuit is OFF 2016-08-10 15:45:13,107 DEBUG [10.22.16.34:56262.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 2016-08-10 15:45:13,112 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/hbase:backup/write-master:562620000000000 2016-08-10 15:45:13,112 INFO [10.22.16.34:56262.activeMasterManager] master.BackupController(51): Created hbase:backup table 2016-08-10 15:45:13,124 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:45:13,152 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:45:13,227 INFO [IPC Server handler 6 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741836_1012{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|RBW]]} size 535 2016-08-10 15:45:13,633 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/.tmp/data/hbase/backup/.tabledesc/.tableinfo.0000000001 2016-08-10 15:45:13,635 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(6162): creating HRegion hbase:backup HTD == 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 
'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/.tmp Table name == hbase:backup 2016-08-10 15:45:13,644 INFO [IPC Server handler 6 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|FINALIZED]]} size 0 2016-08-10 15:45:13,645 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(736): Instantiated hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,646 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1419): Closing hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.: disabling compactions & flushes 2016-08-10 15:45:13,646 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,646 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1552): Closed hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,705 WARN [main] containermanager.AuxServices(130): The Auxiliary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler, which has a name of 'httpshuffle'. Because these are not the same, tools trying to send ServiceData and read Service Meta Data may have issues unless they refer to the name in the config.
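The AuxServices warning appears cosmetic here: the configuration registers the shuffle service under the name mapreduce_shuffle while ShuffleHandler reports its own name as httpshuffle, and the cluster continues to start regardless. The configuration side looks like the sketch below; the keys are the standard YARN ones, shown for orientation rather than as a fix:

    import org.apache.hadoop.conf.Configuration;

    final class ShuffleAuxService {
      static Configuration configure() {
        Configuration yarnConf = new Configuration();
        // Register the shuffle aux-service and map the name to its handler class;
        // ShuffleHandler reporting itself as 'httpshuffle' is what trips the WARN above.
        yarnConf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle");
        yarnConf.set("yarn.nodemanager.aux-services.mapreduce_shuffle.class",
            "org.apache.hadoop.mapred.ShuffleHandler");
        return yarnConf;
      }
    }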
2016-08-10 15:45:13,757 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":38}]},"row":"hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e."} 2016-08-10 15:45:13,758 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:13,759 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1 2016-08-10 15:45:13,863 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,864 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table hbase:backup state org.apache.hadoop.hbase.TableNotFoundException: hbase:backup at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-10 15:45:13,864 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {5a493dba506f3912b964610f82e9b52e state=OFFLINE, ts=1470869113863, server=null} to {5a493dba506f3912b964610f82e9b52e state=PENDING_OPEN, ts=1470869113864, server=10.22.16.34,56266,1470869110579} 2016-08-10 15:45:13,864 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 
with state=PENDING_OPEN, sn=10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,864 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:13,865 DEBUG [ProcedureExecutor-3] master.ServerManager(934): New admin connection to 10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,867 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service AdminService, sasl=false 2016-08-10 15:45:13,867 DEBUG [RpcServer.listener,port=56266] ipc.RpcServer$Listener(880): RpcServer.listener,port=56266: connection from 10.22.16.34:56288; # active connections: 1 2016-08-10 15:45:13,869 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56266] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:13,870 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56266] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56288 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:13,870 INFO [PriorityRpcServer.handler=1,queue=1,port=56266] regionserver.RSRpcServices(1666): Open hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,875 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,875 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869113875,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:backup"} 2016-08-10 15:45:13,876 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:13,877 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table hbase:backup state to ENABLED in META 2016-08-10 15:45:13,877 INFO [RS_OPEN_REGION-10.22.16.34:56266-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56266%2C1470869110579.regiongroup-1, suffix=, logDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579, archiveDir=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs 2016-08-10 15:45:13,880 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] wal.FSHLog(665): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579/10.22.16.34%2C56266%2C1470869110579.regiongroup-1.1470869113877 2016-08-10 15:45:13,885 INFO [RS_OPEN_REGION-10.22.16.34:56266-0] wal.FSHLog(1434): Slow sync cost: 4 ms, current pipeline: [] 2016-08-10 15:45:13,885 INFO [RS_OPEN_REGION-10.22.16.34:56266-0] wal.FSHLog(889): New WAL /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579/10.22.16.34%2C56266%2C1470869110579.regiongroup-1.1470869113877 2016-08-10 15:45:13,886 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] regionserver.HRegion(6339): Opening region: {ENCODED => 
5a493dba506f3912b964610f82e9b52e, NAME => 'hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.', STARTKEY => '', ENDKEY => ''} 2016-08-10 15:45:13,887 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table backup 5a493dba506f3912b964610f82e9b52e 2016-08-10 15:45:13,887 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] regionserver.HRegion(736): Instantiated hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,892 INFO [StoreOpener-5a493dba506f3912b964610f82e9b52e-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:13,892 INFO [StoreOpener-5a493dba506f3912b964610f82e9b52e-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:13,893 DEBUG [StoreOpener-5a493dba506f3912b964610f82e9b52e-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/meta 2016-08-10 15:45:13,896 INFO [StoreOpener-5a493dba506f3912b964610f82e9b52e-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:13,896 INFO [StoreOpener-5a493dba506f3912b964610f82e9b52e-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:13,897 DEBUG [StoreOpener-5a493dba506f3912b964610f82e9b52e-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/session 2016-08-10 15:45:13,898 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e 2016-08-10 15:45:13,901 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:backup, use config (67108864) instead 2016-08-10 15:45:13,906 DEBUG 
[RS_OPEN_REGION-10.22.16.34:56266-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-10 15:45:13,906 INFO [RS_OPEN_REGION-10.22.16.34:56266-0] regionserver.HRegion(871): Onlined 5a493dba506f3912b964610f82e9b52e; next sequenceid=2 2016-08-10 15:45:13,906 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579/10.22.16.34%2C56266%2C1470869110579.regiongroup-1.1470869113877 2016-08-10 15:45:13,907 INFO [PostOpenDeployTasks:5a493dba506f3912b964610f82e9b52e] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,908 WARN [main] containermanager.AuxServices(130): The Auxiliary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler, which has a name of 'httpshuffle'. Because these are not the same, tools trying to send ServiceData and read Service Meta Data may have issues unless they refer to the name in the config. 2016-08-10 15:45:13,908 DEBUG [PriorityRpcServer.handler=2,queue=0,port=56262] master.AssignmentManager(2884): Got transition OPENED for {5a493dba506f3912b964610f82e9b52e state=PENDING_OPEN, ts=1470869113864, server=10.22.16.34,56266,1470869110579} from 10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,909 INFO [PriorityRpcServer.handler=2,queue=0,port=56262] master.RegionStates(1106): Transition {5a493dba506f3912b964610f82e9b52e state=PENDING_OPEN, ts=1470869113864, server=10.22.16.34,56266,1470869110579} to {5a493dba506f3912b964610f82e9b52e state=OPEN, ts=1470869113909, server=10.22.16.34,56266,1470869110579} 2016-08-10 15:45:13,909 INFO [PriorityRpcServer.handler=2,queue=0,port=56262] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. with state=OPEN, openSeqNum=2, server=10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,909 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:45:13,910 DEBUG [PriorityRpcServer.handler=2,queue=0,port=56262] master.RegionStates(452): Onlined 5a493dba506f3912b964610f82e9b52e on 10.22.16.34,56266,1470869110579 2016-08-10 15:45:13,911 DEBUG [PostOpenDeployTasks:5a493dba506f3912b964610f82e9b52e] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:45:13,912 DEBUG [RS_OPEN_REGION-10.22.16.34:56266-0] handler.OpenRegionHandler(126): Opened hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.
on 10.22.16.34,56266,1470869110579 2016-08-10 15:45:14,199 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /2/table-lock/hbase:backup/write-master:562620000000000 2016-08-10 15:45:14,199 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 1.0820sec: CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=FINISHED 2016-08-10 15:45:19,395 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-nodemanager.properties,hadoop-metrics2.properties 2016-08-10 15:45:24,031 INFO [Thread-445] log.Slf4jLog(67): jetty-6.1.26 2016-08-10 15:45:24,035 INFO [Thread-445] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.1/hadoop-yarn-common-2.7.1.jar!/webapps/jobhistory to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_56299_jobhistory____.msxy2y/webapp 2016-08-10 15:45:24,081 INFO [Thread-445] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:56299 2016-08-10 15:45:25,638 INFO [RM-0] log.Slf4jLog(67): jetty-6.1.26 2016-08-10 15:45:25,641 INFO [RM-0] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.1/hadoop-yarn-common-2.7.1.jar!/webapps/cluster to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_56310_cluster____thrq47/webapp Aug 10, 2016 3:45:25 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices as a root resource class Aug 10, 2016 3:45:25 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.mapreduce.v2.hs.webapp.JAXBContextResolver as a provider class Aug 10, 2016 3:45:25 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class Aug 10, 2016 3:45:25 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Aug 10, 2016 3:45:25 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.mapreduce.v2.hs.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton" Aug 10, 2016 3:45:26 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton" Aug 10, 2016 3:45:26 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices to GuiceManagedComponentProvider with the scope "PerRequest" 2016-08-10 15:45:26,377 INFO [RM-0] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:56310 Aug 10, 2016 3:45:26 PM com.google.inject.servlet.GuiceFilter setPipeline WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected. 
2016-08-10 15:45:27,089 INFO [Thread-643] log.Slf4jLog(67): jetty-6.1.26 2016-08-10 15:45:27,091 INFO [Thread-643] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.1/hadoop-yarn-common-2.7.1.jar!/webapps/node to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_56315_node____10gv68/webapp Aug 10, 2016 3:45:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class Aug 10, 2016 3:45:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class Aug 10, 2016 3:45:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class Aug 10, 2016 3:45:27 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Aug 10, 2016 3:45:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton" Aug 10, 2016 3:45:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton" Aug 10, 2016 3:45:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton" 2016-08-10 15:45:27,398 INFO [Thread-643] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:56315 Aug 10, 2016 3:45:27 PM com.google.inject.servlet.GuiceFilter setPipeline WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected. 
2016-08-10 15:45:28,046 INFO [Thread-681] log.Slf4jLog(67): jetty-6.1.26 2016-08-10 15:45:28,049 INFO [Thread-681] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.1/hadoop-yarn-common-2.7.1.jar!/webapps/node to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_56319_node____ni8vys/webapp Aug 10, 2016 3:45:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class Aug 10, 2016 3:45:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class Aug 10, 2016 3:45:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class Aug 10, 2016 3:45:28 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Aug 10, 2016 3:45:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton" Aug 10, 2016 3:45:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton" Aug 10, 2016 3:45:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton" 2016-08-10 15:45:28,207 INFO [Thread-681] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:56319 Aug 10, 2016 3:45:28 PM com.google.inject.servlet.GuiceFilter setPipeline WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected. 
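The three Jetty/Jersey startup blocks above are the JobHistory server ('jobhistory'), ResourceManager ('cluster') and NodeManager ('node') web apps coming up as part of the mini MapReduce cluster, and the repeated GuiceFilter warning is expected when several of these Guice-backed web apps share one JVM. A minimal sketch of how a test brings up the same stack with HBaseTestingUtility (the exact overloads used here are assumptions, not the test's verbatim code):

  import org.apache.hadoop.hbase.HBaseTestingUtility;

  public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
      HBaseTestingUtility util = new HBaseTestingUtility();
      // HDFS + ZooKeeper + 1 master + 1 regionserver, as in the setup above
      util.startMiniCluster();
      // YARN ResourceManager/NodeManagers + JobHistory server; this is what
      // spawns the Jetty "jobhistory", "cluster" and "node" web apps above
      util.startMiniMapReduceCluster();
      try {
        // ... backup/restore test body would go here ...
      } finally {
        util.shutdownMiniMapReduceCluster();
        util.shutdownMiniCluster();
      }
    }
  }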
2016-08-10 15:45:29,037 INFO [main] hbase.HBaseTestingUtility(2469): Mini mapreduce cluster started 2016-08-10 15:45:29,037 INFO [main] backup.TestBackupBase(110): ROOTDIR hdfs://localhost:56218/backupUT 2016-08-10 15:45:29,037 INFO [main] backup.TestBackupBase(112): REMOTE ROOTDIR hdfs://localhost:56251/backupUT 2016-08-10 15:45:29,051 DEBUG [main] client.ConnectionImplementation(604): Table hbase:backup should be available 2016-08-10 15:45:29,051 DEBUG [main] backup.TestBackupBase(125): backup table exists and is available 2016-08-10 15:45:29,104 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:45:29,104 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56320; # active connections: 3 2016-08-10 15:45:29,105 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:29,105 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56320 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:29,114 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.HMaster(2491): Client=tyu//10.22.16.34 creating {NAME => 'ns1'} 2016-08-10 15:45:29,221 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns1) id=5 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
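The "backup table exists and is available" check above can be reproduced with the plain client API; a hedged sketch, assuming a test that owns an HBaseTestingUtility named util (waitTableAvailable blocks until every region of the table is assigned):

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  static void waitForBackupTable(HBaseTestingUtility util) throws Exception {
    TableName backupTable = TableName.valueOf("hbase:backup");
    try (Connection conn = ConnectionFactory.createConnection(util.getConfiguration());
         Admin admin = conn.getAdmin()) {
      if (admin.tableExists(backupTable)) {
        // blocks until all regions of hbase:backup are online
        util.waitTableAvailable(backupTable);
      }
    }
  }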
2016-08-10 15:45:29,243 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-10 15:45:29,353 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-10 15:45:29,443 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:45:29,553 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-10 15:45:29,556 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-10 15:45:29,556 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-10 15:45:29,556 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-10 15:45:29,556 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-10 15:45:29,772 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 549msec: CreateNamespaceProcedure (Namespace=ns1) id=5 owner=tyu state=FINISHED 2016-08-10 15:45:29,861 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-10 15:45:29,863 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.HMaster(2491): Client=tyu//10.22.16.34 creating {NAME => 'ns2'} 2016-08-10 15:45:29,971 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns2) id=6 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
2016-08-10 15:45:29,975 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-10 15:45:30,079 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-10 15:45:30,188 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:45:30,284 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-10 15:45:30,298 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-10 15:45:30,301 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns2 with data: \x0A\x03ns2 2016-08-10 15:45:30,301 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-10 15:45:30,301 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-10 15:45:30,301 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-10 15:45:30,512 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 542msec: CreateNamespaceProcedure (Namespace=ns2) id=6 owner=tyu state=FINISHED 2016-08-10 15:45:30,587 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-10 15:45:30,589 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.HMaster(2491): Client=tyu//10.22.16.34 creating {NAME => 'ns3'} 2016-08-10 15:45:30,698 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns3) id=7 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
2016-08-10 15:45:30,701 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-10 15:45:30,804 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-10 15:45:30,914 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:45:31,010 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-10 15:45:31,023 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-10 15:45:31,027 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns2 with data: \x0A\x03ns2 2016-08-10 15:45:31,027 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-10 15:45:31,027 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns3 with data: \x0A\x03ns3 2016-08-10 15:45:31,028 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-10 15:45:31,028 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-10 15:45:31,234 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 540msec: CreateNamespaceProcedure (Namespace=ns3) id=7 owner=tyu state=FINISHED 2016-08-10 15:45:31,317 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-10 15:45:31,319 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.HMaster(2491): Client=tyu//10.22.16.34 creating {NAME => 'ns4'} 2016-08-10 15:45:31,423 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns4) id=8 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
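Namespaces ns1 through ns4 are created back to back, each as a CreateNamespaceProcedure on the master that the client polls ("Checking to see if procedure is done procId=...") until it reaches FINISHED, roughly 540 msec per namespace here. From the client side that whole cycle is a single blocking Admin call; a sketch (not the test's verbatim code):

  import java.io.IOException;
  import org.apache.hadoop.hbase.NamespaceDescriptor;
  import org.apache.hadoop.hbase.client.Admin;

  static void createTestNamespaces(Admin admin) throws IOException {
    for (String ns : new String[] {"ns1", "ns2", "ns3", "ns4"}) {
      // returns only once the master's CreateNamespaceProcedure is FINISHED,
      // which is what the procId polling in the log corresponds to
      admin.createNamespace(NamespaceDescriptor.create(ns).build());
    }
  }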
2016-08-10 15:45:31,427 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-10 15:45:31,529 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-10 15:45:31,639 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:45:31,736 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-10 15:45:31,746 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-10 15:45:31,751 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns2 with data: \x0A\x03ns2 2016-08-10 15:45:31,751 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-10 15:45:31,751 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns4 with data: \x0A\x03ns4 2016-08-10 15:45:31,751 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns3 with data: \x0A\x03ns3 2016-08-10 15:45:31,751 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-10 15:45:31,752 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-10 15:45:31,963 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 536msec: CreateNamespaceProcedure (Namespace=ns4) id=8 owner=tyu state=FINISHED 2016-08-10 15:45:32,042 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-10 15:45:32,051 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns1:test-1470869129051', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-10 15:45:32,157 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns1:test-1470869129051) id=9 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
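The table spec logged by HMaster for ns1:test-1470869129051 ({NAME => 'f', BLOOMFILTER => 'ROW', VERSIONS => '1', ...}) is simply the defaults for a single column family named 'f'. A sketch of the equivalent client call, using the HTableDescriptor/HColumnDescriptor API that this 2.0.0-SNAPSHOT build still ships (assumed, not the test's verbatim code):

  import java.io.IOException;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;

  static void createTestTable(Admin admin) throws IOException {
    TableName tn = TableName.valueOf("ns1", "test-" + System.currentTimeMillis());
    HTableDescriptor htd = new HTableDescriptor(tn);
    // every attribute in the logged spec is the default for a new family
    htd.addFamily(new HColumnDescriptor("f"));
    admin.createTable(htd); // blocks on the master's CreateTableProcedure
  }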
2016-08-10 15:45:32,162 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:test-1470869129051/write-master:562260000000000 2016-08-10 15:45:32,172 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-10 15:45:32,274 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-10 15:45:32,285 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:45:32,288 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/test-1470869129051/.tabledesc/.tableinfo.0000000001 2016-08-10 15:45:32,289 INFO [RegionOpenAndInitThread-ns1:test-1470869129051-1] regionserver.HRegion(6162): creating HRegion ns1:test-1470869129051 HTD == 'ns1:test-1470869129051', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns1:test-1470869129051 2016-08-10 15:45:32,301 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:45:32,302 DEBUG [RegionOpenAndInitThread-ns1:test-1470869129051-1] regionserver.HRegion(736): Instantiated ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:32,302 DEBUG [RegionOpenAndInitThread-ns1:test-1470869129051-1] regionserver.HRegion(1419): Closing ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.: disabling compactions & flushes 2016-08-10 15:45:32,302 DEBUG [RegionOpenAndInitThread-ns1:test-1470869129051-1] regionserver.HRegion(1446): Updates disabled for region ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:32,302 INFO [RegionOpenAndInitThread-ns1:test-1470869129051-1] regionserver.HRegion(1552): Closed ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 
2016-08-10 15:45:32,414 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":48}]},"row":"ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426."} 2016-08-10 15:45:32,415 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:32,417 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1571): Added 1 2016-08-10 15:45:32,482 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-10 15:45:32,523 INFO [ProcedureExecutor-1] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,524 ERROR [ProcedureExecutor-1] master.TableStateManager(134): Unable to get table ns1:test-1470869129051 state org.apache.hadoop.hbase.TableNotFoundException: ns1:test-1470869129051 at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-10 15:45:32,524 INFO [ProcedureExecutor-1] master.RegionStates(1106): Transition {1af52b0fe0f87b7398a77bf958343426 state=OFFLINE, ts=1470869132523, server=null} to {1af52b0fe0f87b7398a77bf958343426 state=PENDING_OPEN, ts=1470869132524, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:45:32,524 INFO [ProcedureExecutor-1] master.RegionStateStore(207): Updating hbase:meta row ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 
with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,525 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:32,528 INFO [PriorityRpcServer.handler=0,queue=0,port=56228] regionserver.RSRpcServices(1666): Open ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:32,540 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56228%2C1470869104167.regiongroup-2, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs 2016-08-10 15:45:32,543 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:32,547 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] wal.FSHLog(1434): Slow sync cost: 4 ms, current pipeline: [] 2016-08-10 15:45:32,547 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:32,548 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(6339): Opening region: {ENCODED => 1af52b0fe0f87b7398a77bf958343426, NAME => 'ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.', STARTKEY => '', ENDKEY => ''} 2016-08-10 15:45:32,548 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-1470869129051 1af52b0fe0f87b7398a77bf958343426 2016-08-10 15:45:32,549 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(736): Instantiated ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 
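The WAL configuration record above (blocksize=128 MB, rollsize=121.60 MB, prefix ...regiongroup-2) indicates multiple WALs per regionserver, hashed into bounded region groups, with logs rolled at 95% of the block size (128 MB x 0.95 = 121.60 MB). A sketch of a configuration that would produce this layout; the property names are the standard multiwal/roll settings but should be treated as assumptions for this snapshot build:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  Configuration conf = HBaseConfiguration.create();
  // one WAL per bounded region group -> the "regiongroup-N" file suffixes
  conf.set("hbase.wal.provider", "multiwal");
  conf.set("hbase.wal.regiongrouping.strategy", "bounded");
  conf.setInt("hbase.wal.regiongrouping.numgroups", 2);
  // roll a WAL once it reaches 95% of the 128 MB block size, i.e. 121.60 MB
  conf.setLong("hbase.regionserver.hlog.blocksize", 128L * 1024 * 1024);
  conf.setFloat("hbase.regionserver.logroll.multiplier", 0.95f);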
2016-08-10 15:45:32,552 INFO [StoreOpener-1af52b0fe0f87b7398a77bf958343426-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:32,553 INFO [StoreOpener-1af52b0fe0f87b7398a77bf958343426-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:32,554 DEBUG [StoreOpener-1af52b0fe0f87b7398a77bf958343426-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f 2016-08-10 15:45:32,555 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426 2016-08-10 15:45:32,562 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-10 15:45:32,562 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(871): Onlined 1af52b0fe0f87b7398a77bf958343426; next sequenceid=2 2016-08-10 15:45:32,563 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:32,564 INFO [PostOpenDeployTasks:1af52b0fe0f87b7398a77bf958343426] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:32,565 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {1af52b0fe0f87b7398a77bf958343426 state=PENDING_OPEN, ts=1470869132524, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,565 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStates(1106): Transition {1af52b0fe0f87b7398a77bf958343426 state=PENDING_OPEN, ts=1470869132524, server=10.22.16.34,56228,1470869104167} to {1af52b0fe0f87b7398a77bf958343426 state=OPEN, ts=1470869132565, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:45:32,565 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 
with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,565 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:32,567 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStates(452): Onlined 1af52b0fe0f87b7398a77bf958343426 on 10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,567 DEBUG [ProcedureExecutor-1] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,567 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869132567,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:test-1470869129051"} 2016-08-10 15:45:32,567 ERROR [B.defaultRpcServer.handler=0,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns1:test-1470869129051 state org.apache.hadoop.hbase.TableNotFoundException: ns1:test-1470869129051 at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-10 15:45:32,568 DEBUG [PostOpenDeployTasks:1af52b0fe0f87b7398a77bf958343426] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:32,568 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] handler.OpenRegionHandler(126): Opened ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 
on 10.22.16.34,56228,1470869104167 2016-08-10 15:45:32,568 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:32,569 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1700): Updated table ns1:test-1470869129051 state to ENABLED in META 2016-08-10 15:45:32,786 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-10 15:45:32,896 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:test-1470869129051/write-master:562260000000000 2016-08-10 15:45:32,896 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 736msec: CreateTableProcedure (table=ns1:test-1470869129051) id=9 owner=tyu state=FINISHED 2016-08-10 15:45:33,293 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-10 15:45:33,294 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns1:test-1470869129051 completed 2016-08-10 15:45:33,295 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xe3fda8 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:45:33,300 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xe3fda80x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:45:33,301 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cff057e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:45:33,302 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client 2016-08-10 15:45:33,302 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:45:33,302 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xe3fda8-0x15676a15116000c connected 2016-08-10 15:45:33,305 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:33,305 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56327; # active connections: 4 2016-08-10 15:45:33,306 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:33,306 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56327 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:33,314 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:33,314 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56328; # active connections: 2 2016-08-10
15:45:33,315 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:33,315 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56328 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:33,319 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,571 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,573 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,575 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,577 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,578 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,580 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,582 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,583 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,585 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,587 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,588 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,590 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,592 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,594 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,595 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,597 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,598 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,600 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,602 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,604 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,605 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,607 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,608 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,610 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,612 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,614 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,616 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,618 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,620 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,622 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,624 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,626 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,628 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,630 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,632 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,634 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,636 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,638 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,640 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,642 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,643 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,645 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,647 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,649 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,651 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,653 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,655 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,656 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,658 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,659 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,661 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,663 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,664 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,666 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,668 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,669 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,671 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,672 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,674 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,676 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,677 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,679 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,681 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,682 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,684 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,686 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,687 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,689 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,691 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,693 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,694 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,696 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,698 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,699 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,701 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,703 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,704 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,706 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:33,707 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer 
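The long run of "syncing writer" DEBUG lines above (of which the first and last occurrences are kept here) comes from the five FSHLog SyncRunner threads, sync.0 through sync.4, draining sync requests for the regiongroup-2 WAL while the test writes data. What drives these syncs is ordinary client writes with WAL durability. The following is a minimal, hypothetical sketch of such a write against the HBase 1.x client API in use in this run; the table name is taken from the log, while the row, qualifier, and value are illustrative only:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SyncWalWrite {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("ns2:test-14708691290511"))) {
          Put put = new Put(Bytes.toBytes("row1"));
          put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
          // SYNC_WAL: the region server appends the edit to its WAL and hands the
          // sync to an FSHLog SyncRunner thread before acknowledging the put.
          put.setDurability(Durability.SYNC_WAL);
          table.put(put);
        }
      }
    }

SYNC_WAL is already the default durability, so the explicit setter above is shown only to make the connection to the sync lines visible.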
2016-08-10 15:45:33,716 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:45:33,718 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns2:test-14708691290511', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-10 15:45:33,822 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns2:test-14708691290511) id=10 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
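The HMaster line above records the incoming create request for 'ns2:test-14708691290511' with a single column family 'f' and default attributes. A minimal sketch of a client call that would produce such a request, assuming the 1.x Admin API of this era and that the ns2 namespace already exists by this point in the run:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateTableExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Namespace-qualified table name, as in the log: ns2:test-14708691290511.
          HTableDescriptor htd =
              new HTableDescriptor(TableName.valueOf("ns2:test-14708691290511"));
          // Single family 'f'; attributes left unset fall back to the defaults the
          // master echoes above (VERSIONS => '1', BLOCKSIZE => '65536', and so on).
          htd.addFamily(new HColumnDescriptor("f").setMaxVersions(1));
          admin.createTable(htd); // blocks until the CreateTableProcedure finishes
        }
      }
    }

The DEBUG line that follows the request shows the master side of the same call: the create is not executed inline in the RPC handler but is queued as a CreateTableProcedure (procId=10) in the procedure store, and the client polls for its completion.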
2016-08-10 15:45:33,827 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-10 15:45:33,828 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:test-14708691290511/write-master:562260000000000
2016-08-10 15:45:33,934 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-10 15:45:33,950 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:33,953 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/test-14708691290511/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:33,955 INFO [RegionOpenAndInitThread-ns2:test-14708691290511-1] regionserver.HRegion(6162): creating HRegion ns2:test-14708691290511 HTD == 'ns2:test-14708691290511', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns2:test-14708691290511
2016-08-10 15:45:33,964 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:33,965 DEBUG [RegionOpenAndInitThread-ns2:test-14708691290511-1] regionserver.HRegion(736): Instantiated ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:45:33,965 DEBUG [RegionOpenAndInitThread-ns2:test-14708691290511-1] regionserver.HRegion(1419): Closing ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.: disabling compactions & flushes
2016-08-10 15:45:33,965 DEBUG [RegionOpenAndInitThread-ns2:test-14708691290511-1] regionserver.HRegion(1446): Updates disabled for region ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:45:33,965 INFO [RegionOpenAndInitThread-ns2:test-14708691290511-1] regionserver.HRegion(1552): Closed ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:45:34,074 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":49}]},"row":"ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357."}
2016-08-10 15:45:34,076 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:34,077 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:45:34,137 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-10 15:45:34,184 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,185 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table ns2:test-14708691290511 state
org.apache.hadoop.hbase.TableNotFoundException: ns2:test-14708691290511
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:45:34,185 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {a06bab69e6ee6a1a194d4fd364f48357 state=OFFLINE, ts=1470869134184, server=null} to {a06bab69e6ee6a1a194d4fd364f48357 state=PENDING_OPEN, ts=1470869134185, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:45:34,185 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,186 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:34,188 INFO [PriorityRpcServer.handler=3,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:45:34,197 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.16.34%2C56228%2C1470869104167.regiongroup-3, suffix=, logDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167, archiveDir=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:45:34,199 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:45:34,203 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] wal.FSHLog(1434): Slow sync cost: 3 ms, current pipeline: []
2016-08-10 15:45:34,203 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] wal.FSHLog(889): New WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:45:34,204 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(6339): Opening region: {ENCODED => a06bab69e6ee6a1a194d4fd364f48357, NAME => 'ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:45:34,204 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-14708691290511 a06bab69e6ee6a1a194d4fd364f48357
2016-08-10 15:45:34,204 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(736): Instantiated ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
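The TableNotFoundException logged by TableStateManager above looks alarming but appears to be a transient race inside CreateTableProcedure: the new region is assigned before the table's state entry has been written to hbase:meta (that only happens later, at the "Updated table ... state to ENABLED in META" line below), and the procedure still completes normally. From a client's point of view, the safe pattern is simply not to use the table until it is fully available; a hypothetical snippet against the same 1.x Admin API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTable {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("ns2:test-14708691290511");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // isTableAvailable returns true only once every region of the table is open.
          while (!admin.isTableAvailable(tn)) {
            Thread.sleep(100);
          }
        }
      }
    }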
2016-08-10 15:45:34,208 INFO [StoreOpener-a06bab69e6ee6a1a194d4fd364f48357-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:34,209 INFO [StoreOpener-a06bab69e6ee6a1a194d4fd364f48357-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4, incoming window min 6
2016-08-10 15:45:34,210 DEBUG [StoreOpener-a06bab69e6ee6a1a194d4fd364f48357-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f
2016-08-10 15:45:34,211 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357
2016-08-10 15:45:34,218 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:45:34,218 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(871): Onlined a06bab69e6ee6a1a194d4fd364f48357; next sequenceid=2
2016-08-10 15:45:34,219 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:45:34,220 INFO [PostOpenDeployTasks:a06bab69e6ee6a1a194d4fd364f48357] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:45:34,221 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {a06bab69e6ee6a1a194d4fd364f48357 state=PENDING_OPEN, ts=1470869134185, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,221 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStates(1106): Transition {a06bab69e6ee6a1a194d4fd364f48357 state=PENDING_OPEN, ts=1470869134185, server=10.22.16.34,56228,1470869104167} to {a06bab69e6ee6a1a194d4fd364f48357 state=OPEN, ts=1470869134221, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:45:34,221 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,222 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:34,223 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStates(452): Onlined a06bab69e6ee6a1a194d4fd364f48357 on 10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,223 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,223 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869134223,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:test-14708691290511"}
2016-08-10 15:45:34,223 ERROR [B.defaultRpcServer.handler=3,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns2:test-14708691290511 state
org.apache.hadoop.hbase.TableNotFoundException: ns2:test-14708691290511
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:45:34,224 DEBUG [PostOpenDeployTasks:a06bab69e6ee6a1a194d4fd364f48357] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:45:34,224 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] handler.OpenRegionHandler(126): Opened ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. on 10.22.16.34,56228,1470869104167
2016-08-10 15:45:34,225 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:34,225 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns2:test-14708691290511 state to ENABLED in META
2016-08-10 15:45:34,443 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-10 15:45:34,557 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:test-14708691290511/write-master:562260000000000
2016-08-10 15:45:34,557 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 727msec: CreateTableProcedure (table=ns2:test-14708691290511) id=10 owner=tyu state=FINISHED
2016-08-10 15:45:34,949 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-10 15:45:34,949 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns2:test-14708691290511 completed
2016-08-10 15:45:34,955 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
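The HBaseAdmin$TableFuture line above marks the point where the blocking createTable call returned to the test's main thread, roughly 1.2 seconds after the master logged the request. What a teardown for such a test would look like is not shown in this excerpt, but the usual counterpart is to disable and delete the table; a hypothetical sketch with the same 1.x Admin API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableExample {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("ns2:test-14708691290511");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.tableExists(tn)) {
            admin.disableTable(tn); // a table must be disabled before it can be deleted
            admin.deleteTable(tn);
          }
        }
      }
    }

After the create completes, the SyncRunner lines for the new regiongroup-3 WAL resume as the test starts writing to the table.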
2016-08-10 15:45:35,089 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,090 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,092 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,093 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,095 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,096 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,098 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,099 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,101 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,102 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,104 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,106 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,108 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,109 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,111 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,113 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,114 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,116 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,118 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,119 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,121 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,123 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,124 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,126 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,127 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,129 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,131 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,132 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,134 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,135 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,137 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,139 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,141 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,142 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,144 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,145 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,147 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,148 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,150 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,152 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,154 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,155 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,157 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,158 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,160 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,162 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,164 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,166 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,167 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,169 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,170 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,172 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,173 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,175 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,176 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,178 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,179 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,180 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,182 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,183 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,185 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,187 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,189 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,190 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,192 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,195 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,196 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,198 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,200 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,202 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,204 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,206 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,208 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,210 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,212 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,214 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,215 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,217 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,219 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,221 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,222 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,224 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,225 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,228 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,229 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,231 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,233 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,234 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,236 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,238 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,239 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,241 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,242 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,244 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,245 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,247 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,249 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,250 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,252 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,254 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,256 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,257 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,259 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,261 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,263 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,264 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,266 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,268 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,269 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,270 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,272 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,273 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,274 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,276 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,277 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,278 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,280 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,281 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,282 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,284 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,285 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,286 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,288 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,289 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,291 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,292 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:35,294 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns3:test-14708691290512', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-10 15:45:35,403 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns3:test-14708691290512) id=11 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
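The CREATE recorded above is the master-side trace of a client createTable call: 'ns3:test-14708691290512' with a single family 'f', every listed attribute at its default. A minimal client-side sketch of the same request follows, assuming the HBase 1.x Java API that this log's class names (HBaseAdmin, HTableDescriptor) suggest; the configuration setup and class name are illustrative, and only the table name and family are taken from the log. In the test itself the connection would come from HBaseTestingUtility rather than hbase-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNs3Table {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Table name and family 'f' from the log; namespace 'ns3' is assumed
          // to exist already (it was created earlier in this run).
          HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns3:test-14708691290512"));
          // Family defaults match the descriptor printed above: VERSIONS => '1',
          // BLOCKSIZE => '65536', BLOOMFILTER and COMPRESSION => 'NONE', etc.
          htd.addFamily(new HColumnDescriptor("f"));
          // Blocks until the master's CreateTableProcedure (id=11 above) finishes;
          // meanwhile the client polls "is procedure done", as the log shows.
          admin.createTable(htd);
        }
      }
    }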
2016-08-10 15:45:35,407 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-10 15:45:35,409 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:test-14708691290512/write-master:562260000000000
2016-08-10 15:45:35,510 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-10 15:45:35,524 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:35,527 DEBUG [ProcedureExecutor-2] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/test-14708691290512/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:35,529 INFO [RegionOpenAndInitThread-ns3:test-14708691290512-1] regionserver.HRegion(6162): creating HRegion ns3:test-14708691290512 HTD == 'ns3:test-14708691290512', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns3:test-14708691290512
2016-08-10 15:45:35,537 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:35,538 DEBUG [RegionOpenAndInitThread-ns3:test-14708691290512-1] regionserver.HRegion(736): Instantiated ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,538 DEBUG [RegionOpenAndInitThread-ns3:test-14708691290512-1] regionserver.HRegion(1419): Closing ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.: disabling compactions & flushes
2016-08-10 15:45:35,538 DEBUG [RegionOpenAndInitThread-ns3:test-14708691290512-1] regionserver.HRegion(1446): Updates disabled for region ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,539 INFO [RegionOpenAndInitThread-ns3:test-14708691290512-1] regionserver.HRegion(1552): Closed ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,647 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":49}]},"row":"ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1."}
2016-08-10 15:45:35,649 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:35,650 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:45:35,716 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-10 15:45:35,756 INFO [ProcedureExecutor-2] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,757 ERROR [ProcedureExecutor-2] master.TableStateManager(134): Unable to get table ns3:test-14708691290512 state
org.apache.hadoop.hbase.TableNotFoundException: ns3:test-14708691290512
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:45:35,757 INFO [ProcedureExecutor-2] master.RegionStates(1106): Transition {8229c2c41c671b66ea383beee31266e1 state=OFFLINE, ts=1470869135756, server=null} to {8229c2c41c671b66ea383beee31266e1 state=PENDING_OPEN, ts=1470869135757, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:45:35,757 INFO [ProcedureExecutor-2] master.RegionStateStore(207): Updating hbase:meta row ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,758 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:35,760 INFO [PriorityRpcServer.handler=2,queue=0,port=56228] regionserver.RSRpcServices(1666): Open ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,766 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(6339): Opening region: {ENCODED => 8229c2c41c671b66ea383beee31266e1, NAME => 'ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:45:35,766 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-14708691290512 8229c2c41c671b66ea383beee31266e1
2016-08-10 15:45:35,767 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(736): Instantiated ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,770 INFO [StoreOpener-8229c2c41c671b66ea383beee31266e1-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:45:35,771 INFO [StoreOpener-8229c2c41c671b66ea383beee31266e1-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:45:35,772 DEBUG [StoreOpener-8229c2c41c671b66ea383beee31266e1-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/test-14708691290512/8229c2c41c671b66ea383beee31266e1/f
2016-08-10 15:45:35,773 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/test-14708691290512/8229c2c41c671b66ea383beee31266e1
2016-08-10 15:45:35,779 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/test-14708691290512/8229c2c41c671b66ea383beee31266e1/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:45:35,779 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(871): Onlined 8229c2c41c671b66ea383beee31266e1; next sequenceid=2
2016-08-10 15:45:35,781 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985
2016-08-10 15:45:35,782 INFO [PostOpenDeployTasks:8229c2c41c671b66ea383beee31266e1] regionserver.HRegionServer(1952): Post open deploy tasks for ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,783 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {8229c2c41c671b66ea383beee31266e1 state=PENDING_OPEN, ts=1470869135757, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,783 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStates(1106): Transition {8229c2c41c671b66ea383beee31266e1 state=PENDING_OPEN, ts=1470869135757, server=10.22.16.34,56228,1470869104167} to {8229c2c41c671b66ea383beee31266e1 state=OPEN, ts=1470869135783, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:45:35,783 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,783 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:35,784 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStates(452): Onlined 8229c2c41c671b66ea383beee31266e1 on 10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,784 DEBUG [ProcedureExecutor-2] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,785 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869135784,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:test-14708691290512"}
2016-08-10 15:45:35,785 ERROR [B.defaultRpcServer.handler=3,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns3:test-14708691290512 state
org.apache.hadoop.hbase.TableNotFoundException: ns3:test-14708691290512
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:45:35,785 DEBUG [PostOpenDeployTasks:8229c2c41c671b66ea383beee31266e1] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:35,786 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] handler.OpenRegionHandler(126): Opened ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1. on 10.22.16.34,56228,1470869104167
2016-08-10 15:45:35,786 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:45:35,787 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns3:test-14708691290512 state to ENABLED in META
2016-08-10 15:45:36,019 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-10 15:45:36,114 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:test-14708691290512/write-master:562260000000000
2016-08-10 15:45:36,114 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 711msec: CreateTableProcedure (table=ns3:test-14708691290512) id=11 owner=tyu state=FINISHED
2016-08-10 15:45:36,523 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-10 15:45:36,524 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns3:test-14708691290512 completed
2016-08-10 15:45:36,541 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2016-08-10 15:45:36,548 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2016-08-10 15:45:36,551 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns4:test-14708691290513', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-10 15:45:36,654 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns4:test-14708691290513) id=12 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
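The "Waiting up to [60,000] milli-secs" entries above are emitted by the test harness's Waiter utility, which repeatedly evaluates a predicate until it holds or the timeout (scaled by wait.for.ratio) expires. A minimal sketch of that polling pattern follows, assuming HBaseTestingUtility's waitFor; the tableExists predicate is an illustrative assumption, not necessarily the condition this particular test polls.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.Waiter;

    public class WaitForTable {
      // Re-evaluates the predicate until it returns true, throwing if the
      // 60 s budget runs out first, matching the [60,000] in the log.
      static void waitForTable(final HBaseTestingUtility util) throws Exception {
        util.waitFor(60000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            return util.getHBaseAdmin().tableExists(TableName.valueOf("ns3:test-14708691290512"));
          }
        });
      }
    }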
2016-08-10 15:45:36,658 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-10 15:45:36,660 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:test-14708691290513/write-master:562260000000000
2016-08-10 15:45:36,765 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-10 15:45:36,778 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:36,781 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns4/test-14708691290513/.tabledesc/.tableinfo.0000000001
2016-08-10 15:45:36,782 INFO [RegionOpenAndInitThread-ns4:test-14708691290513-1] regionserver.HRegion(6162): creating HRegion ns4:test-14708691290513 HTD == 'ns4:test-14708691290513', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns4:test-14708691290513
2016-08-10 15:45:36,792 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:36,793 DEBUG [RegionOpenAndInitThread-ns4:test-14708691290513-1] regionserver.HRegion(736): Instantiated ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:45:36,793 DEBUG [RegionOpenAndInitThread-ns4:test-14708691290513-1] regionserver.HRegion(1419): Closing ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.: disabling compactions & flushes
2016-08-10 15:45:36,793 DEBUG [RegionOpenAndInitThread-ns4:test-14708691290513-1] regionserver.HRegion(1446): Updates disabled for region ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:45:36,793 INFO [RegionOpenAndInitThread-ns4:test-14708691290513-1] regionserver.HRegion(1552): Closed ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
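The ns4 region init above (create under .tmp, instantiate, then close) is the standard CreateTableProcedure step before assignment; once the region is opened on the regionserver below, the table serves ordinary reads and writes, each mutation passing through one of the per-server WAL groups whose sync activity fills this log. A hedged sketch of such a client round trip against this table follows; the row key, qualifier, and value are invented for illustration, assuming the 1.x Table API.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutGetSmokeTest {
      static void putAndGet(Connection conn) throws Exception {
        try (Table table = conn.getTable(TableName.valueOf("ns4:test-14708691290513"))) {
          // Write one cell to family 'f' (the only family this table has).
          Put put = new Put(Bytes.toBytes("row1"));
          put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
          table.put(put); // appended to a regiongroup WAL and synced, as logged above
          // Read it back and check the round trip.
          Result result = table.get(new Get(Bytes.toBytes("row1")));
          String value = Bytes.toString(result.getValue(Bytes.toBytes("f"), Bytes.toBytes("q")));
          if (!"v".equals(value)) {
            throw new IllegalStateException("unexpected value: " + value);
          }
        }
      }
    }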
2016-08-10 15:45:36,906 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":49}]},"row":"ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971."} 2016-08-10 15:45:36,908 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:36,909 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1 2016-08-10 15:45:36,969 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=12 2016-08-10 15:45:37,019 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,020 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table ns4:test-14708691290513 state org.apache.hadoop.hbase.TableNotFoundException: ns4:test-14708691290513 at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-10 15:45:37,020 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {066be6466168f97a0986d6b8bafdb971 state=OFFLINE, ts=1470869137019, server=null} to {066be6466168f97a0986d6b8bafdb971 state=PENDING_OPEN, ts=1470869137020, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:45:37,020 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. 
with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,021 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:37,023 INFO [PriorityRpcServer.handler=1,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. 2016-08-10 15:45:37,028 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(6339): Opening region: {ENCODED => 066be6466168f97a0986d6b8bafdb971, NAME => 'ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.', STARTKEY => '', ENDKEY => ''} 2016-08-10 15:45:37,028 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-14708691290513 066be6466168f97a0986d6b8bafdb971 2016-08-10 15:45:37,029 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(736): Instantiated ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. 2016-08-10 15:45:37,032 INFO [StoreOpener-066be6466168f97a0986d6b8bafdb971-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:45:37,032 INFO [StoreOpener-066be6466168f97a0986d6b8bafdb971-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:45:37,034 DEBUG [StoreOpener-066be6466168f97a0986d6b8bafdb971-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/test-14708691290513/066be6466168f97a0986d6b8bafdb971/f 2016-08-10 15:45:37,035 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/test-14708691290513/066be6466168f97a0986d6b8bafdb971 2016-08-10 15:45:37,041 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/test-14708691290513/066be6466168f97a0986d6b8bafdb971/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-10 15:45:37,041 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(871): Onlined 066be6466168f97a0986d6b8bafdb971; next sequenceid=2 2016-08-10 15:45:37,041 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:37,046 INFO [PostOpenDeployTasks:066be6466168f97a0986d6b8bafdb971] regionserver.HRegionServer(1952): 
Post open deploy tasks for ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. 2016-08-10 15:45:37,047 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {066be6466168f97a0986d6b8bafdb971 state=PENDING_OPEN, ts=1470869137020, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,047 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(1106): Transition {066be6466168f97a0986d6b8bafdb971 state=PENDING_OPEN, ts=1470869137020, server=10.22.16.34,56228,1470869104167} to {066be6466168f97a0986d6b8bafdb971 state=OPEN, ts=1470869137047, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:45:37,047 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,048 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:37,049 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(452): Onlined 066be6466168f97a0986d6b8bafdb971 on 10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,049 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,049 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869137049,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:test-14708691290513"} 2016-08-10 15:45:37,049 ERROR [B.defaultRpcServer.handler=4,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns4:test-14708691290513 state org.apache.hadoop.hbase.TableNotFoundException: ns4:test-14708691290513 at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-10 15:45:37,050 DEBUG [PostOpenDeployTasks:066be6466168f97a0986d6b8bafdb971] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. 2016-08-10 15:45:37,050 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] handler.OpenRegionHandler(126): Opened ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. 
on 10.22.16.34,56228,1470869104167 2016-08-10 15:45:37,050 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:37,051 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table ns4:test-14708691290513 state to ENABLED in META 2016-08-10 15:45:37,272 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=12 2016-08-10 15:45:37,377 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:test-14708691290513/write-master:562260000000000 2016-08-10 15:45:37,377 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 720msec: CreateTableProcedure (table=ns4:test-14708691290513) id=12 owner=tyu state=FINISHED 2016-08-10 15:45:37,777 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=12 2016-08-10 15:45:37,777 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns4:test-14708691290513 completed 2016-08-10 15:45:37,777 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2016-08-10 15:45:37,783 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2016-08-10 15:45:37,791 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116000c 2016-08-10 15:45:37,794 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:45:37,797 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56328 because read count=-1. Number of active connections: 2 2016-08-10 15:45:37,797 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56327 because read count=-1. 
Number of active connections: 4 2016-08-10 15:45:37,798 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (1763275380) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:45:37,798 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-1483916859) to /10.22.16.34:56228 from tyu: closed 2016-08-10 15:45:37,891 INFO [main] hbase.ResourceChecker(148): before: backup.TestIncrementalBackup#TestIncBackupRestore Thread=790, OpenFileDescriptor=1032, MaxFileDescriptor=10240, SystemLoadAverage=207, ProcessCount=267, AvailableMemoryMB=431 2016-08-10 15:45:37,892 WARN [main] hbase.ResourceChecker(135): Thread=790 is superior to 500 2016-08-10 15:45:37,892 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=1032 is superior to 1024 2016-08-10 15:45:37,892 INFO [main] backup.TestIncrementalBackup(50): create full backup image for all tables 2016-08-10 15:45:37,892 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xb319bc2 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:45:37,897 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xb319bc20x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:45:37,897 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25195818, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:45:37,898 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client 2016-08-10 15:45:37,898 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:45:37,898 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xb319bc2-0x15676a15116000d connected 2016-08-10 15:45:37,916 INFO [main] util.BackupClientUtil(107): Backup root dir hdfs://localhost:56218/backupUT does not exist. Will be created.
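Before the backup begins, note the two ERROR stack traces earlier in the create-table sequence: AssignmentManager consults TableStateManager both while assigning and in onRegionOpen, but CreateTableProcedure only writes the table's state row afterwards ("Updated table ... state to ENABLED in META" comes later in each case), so getTableState throws TableNotFoundException and assignment proceeds anyway. A tolerant caller might look like the sketch below; the StateReader shim mirrors the getTableState signature from the trace, and the retry policy is invented:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotFoundException;

    public class TableStateRetrySketch {
      // Hypothetical shim standing in for the master-internal TableStateManager.
      interface StateReader {
        Object getTableState(TableName table) throws IOException;
      }

      static Object getStateTolerant(StateReader reader, TableName table)
          throws IOException, InterruptedException {
        for (int attempt = 0; attempt < 5; attempt++) {
          try {
            return reader.getTableState(table);
          } catch (TableNotFoundException e) {
            Thread.sleep(100); // the state row lands in hbase:meta moments later
          }
        }
        return null; // caller treats "unknown" as "not disabled/disabling"
      }
    }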
2016-08-10 15:45:37,921 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:37,921 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56340; # active connections: 4 2016-08-10 15:45:37,922 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:37,922 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56340 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:37,929 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:45:37,929 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56341; # active connections: 5 2016-08-10 15:45:37,930 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:37,930 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56341 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:37,949 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3479054d connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:45:37,952 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3479054d0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:45:37,953 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ddb8be9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:45:37,953 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] ipc.AsyncRpcClient(160): Starting async HBase RPC client 2016-08-10 15:45:37,953 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:45:37,954 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3479054d-0x15676a15116000e connected 2016-08-10 15:45:37,954 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] backup.BackupInfo(125): CreateBackupContext: 4 ns1:test-1470869129051 2016-08-10 15:45:38,069 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-10 15:45:38,136 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226]
procedure2.ProcedureExecutor(669): Procedure FullTableBackupProcedure (targetRootDir=hdfs://localhost:56218/backupUT; backupId=backup_1470869137937; tables=ns1:test-1470869129051,ns2:test-14708691290511,ns3:test-14708691290512,ns4:test-14708691290513) id=13 state=RUNNABLE:PRE_SNAPSHOT_TABLE added to the store. 2016-08-10 15:45:38,140 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:562260000000001 2016-08-10 15:45:38,143 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:45:38,144 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(130): Backup backup_1470869137937 started at 1470869138143. 2016-08-10 15:45:38,144 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1470869137937 set status=RUNNING 2016-08-10 15:45:38,156 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:38,156 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56343; # active connections: 6 2016-08-10 15:45:38,157 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:38,158 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56343 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:38,162 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:38,162 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56344; # active connections: 2 2016-08-10 15:45:38,163 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:38,163 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56344 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:38,164 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:38,165 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(134): Backup session backup_1470869137937 has been started. 
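The FullTableBackupProcedure first records the new session in the hbase:backup system table ("update backup status in hbase:backup ... set status=RUNNING") before touching any WALs. A rough sketch of that bookkeeping through the ordinary client API; the row key and column names below are invented, only the table name and backup id appear in the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BackupMetaSketch {
      // Mark a backup session RUNNING in the hbase:backup meta table.
      static void markRunning(Connection conn, String backupId) throws Exception {
        try (Table meta = conn.getTable(TableName.valueOf("hbase:backup"))) {
          Put p = new Put(Bytes.toBytes("session:" + backupId));       // illustrative row key
          p.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("state"),   // illustrative CF/qualifier
              Bytes.toBytes("RUNNING"));
          meta.put(p);
        }
      }
    }

Usage would be markRunning(conn, "backup_1470869137937"), matching the session id in the entries above; the real BackupSystemTable schema in this branch is not visible in the log.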
2016-08-10 15:45:38,165 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(180): read backup start code from hbase:backup 2016-08-10 15:45:38,167 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(205): write backup start code to hbase:backup 0 2016-08-10 15:45:38,168 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:38,170 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(522): Execute roll log procedure for full backup ... 2016-08-10 15:45:38,187 DEBUG [ProcedureExecutor-4] procedure.ProcedureCoordinator(177): Submitting procedure rolllog 2016-08-10 15:45:38,188 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(196): Starting procedure 'rolllog' 2016-08-10 15:45:38,188 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-10 15:45:38,188 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire' 2016-08-10 15:45:38,188 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members. 2016-08-10 15:45:38,189 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,189 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,190 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:45:38,190 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,190 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:45:38,190 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:45:38,190 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:45:38,190 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:45:38,190 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:45:38,191 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, 
/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,191 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,191 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,191 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,191 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,191 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-10 15:45:38,191 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,191 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,192 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35 2016-08-10 15:45:38,192 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35 2016-08-10 15:45:38,192 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,192 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,192 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup. 2016-08-10 15:45:38,192 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup. 2016-08-10 15:45:38,202 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(53): Constructing a LogRollBackupSubprocedure. 2016-08-10 15:45:38,202 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(53): Constructing a LogRollBackupSubprocedure. 
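The block above is HBase's ZooKeeper-based two-phase barrier in action: the coordinator creates /1/rolllog-proc/acquired/rolllog, each member (master and region server) notices it via a children-changed watch, runs its local 'acquire' step, advertises completion by creating a child znode named after itself, and then watches for the global 'reached' barrier. Stripped of HBase's ZKUtil/ZKProcedureMemberRpcs wrappers, the member side is roughly the following; paths and the member name come straight from the log, the rest is a bare-bones sketch:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierMemberSketch {
      public static void main(String[] args) throws Exception {
        String member = "10.22.16.34,56228,1470869104167";
        ZooKeeper zk = new ZooKeeper("localhost:50432", 30_000, event -> { });

        // "Looking for new procedures under znode:'/1/rolllog-proc/acquired'"
        for (String proc : zk.getChildren("/1/rolllog-proc/acquired", false)) {
          // ... run the local acquire work, then join the barrier:
          // "Member: '...' joining acquired barrier for procedure (rolllog) in zk"
          zk.create("/1/rolllog-proc/acquired/" + proc + "/" + member,
              new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
          // "Watch for global barrier reached:/1/rolllog-proc/reached/rolllog"
          zk.exists("/1/rolllog-proc/reached/" + proc, true);
        }
      }
    }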
2016-08-10 15:45:38,202 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2016-08-10 15:45:38,203 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2016-08-10 15:45:38,203 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms 2016-08-10 15:45:38,203 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-10 15:45:38,203 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms 2016-08-10 15:45:38,204 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-10 15:45:38,204 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage 2016-08-10 15:45:38,204 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired 2016-08-10 15:45:38,204 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage 2016-08-10 15:45:38,205 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired 2016-08-10 15:45:38,205 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure (rolllog) in zk 2016-08-10 15:45:38,204 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure (rolllog) in zk 2016-08-10 15:45:38,206 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,206 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,206 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,206 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,207 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,207 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,207 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:38,207 DEBUG [member: 
'10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,207 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:38,207 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,207 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:38,207 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:45:38,208 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:38,208 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,208 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,209 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,209 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure 'rolllog' on coordinator 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@79356d13[Count = 1] remaining members to acquire global barrier 2016-08-10 15:45:38,210 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,210 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:38,210 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:45:38,211 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:38,211 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,211 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,211 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,212 DEBUG 
[main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:38,212 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:38,212 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure 'rolllog' on coordinator 2016-08-10 15:45:38,212 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@79356d13[Count = 0] remaining members to acquire global barrier 2016-08-10 15:45:38,212 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution. 2016-08-10 15:45:38,212 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator. 2016-08-10 15:45:38,213 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,213 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,213 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:38,213 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:45:38,213 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
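With the 'reached' barrier in place, each member does the actual in-barrier work: rolling its WAL writers so the backup gets a clean log boundary (the "Rolled WAL ... / Archiving ... to oldWALs" entries that follow). For illustration only, the public-API way to force the same roll on every region server; this is not the code path used here, which goes through LogRollBackupSubprocedure:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class RollAllWals {
      // Ask each live region server to roll its WAL; the old file is archived
      // to oldWALs once its edits are persisted, the effect visible below.
      static void rollAll(Admin admin) throws Exception {
        for (ServerName sn : admin.getClusterStatus().getServers()) {
          admin.rollWALWriter(sn);
        }
      }
    }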
2016-08-10 15:45:38,214 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,214 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-10 15:45:38,214 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:38,214 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,214 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,215 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,215 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:38,215 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:38,215 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,216 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,221 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish. 2016-08-10 15:45:38,221 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish. 2016-08-10 15:45:38,221 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(72): ++ DRPC started: 10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,221 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(72): ++ DRPC started: 10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,221 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(77): Trying to roll log in backup subprocedure, current log number: 1470869107985 2016-08-10 15:45:38,221 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(77): Trying to roll log in backup subprocedure, current log number: 1470869107339 2016-08-10 15:45:38,224 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:45:38,225 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:45:38,231 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 2016-08-10 15:45:38,231 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 2016-08-10 15:45:38,235 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 83 2016-08-10 15:45:38,236 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741834_1010{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 387 2016-08-10 15:45:38,237 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] wal.FSHLog(885): Rolled WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:45:38,238 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] wal.FSHLog(952): Archiving hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 2016-08-10 15:45:38,242 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): After roll log in backup subprocedure, current log number: 1470869138221 2016-08-10 15:45:38,242 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-10 15:45:38,245 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:38,246 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56347; # active connections: 3 2016-08-10 15:45:38,246 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:45:38,247 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56347 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:38,247 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:45:38,250 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-10 15:45:38,251 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing 
writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:38,252 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-10 15:45:38,252 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.16.34,56226,1470869103454' in zk 2016-08-10 15:45:38,254 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,254 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-10 15:45:38,254 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,254 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,254 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,254 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:38,254 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:45:38,254 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:38,254 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 2016-08-10 15:45:38,255 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:38,255 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,256 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,256 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,257 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:38,257 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:38,257 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,258 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,258 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.16.34,56226,1470869103454': 2016-08-10 15:45:38,258 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56226,1470869103454' released barrier for procedure 'rolllog', counting down latch.
Waiting for 1 more 2016-08-10 15:45:38,454 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:45:38,642 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] wal.FSHLog(885): Rolled WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 with entries=1, filesize=387 B; new WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:45:38,643 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] wal.FSHLog(952): Archiving hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 2016-08-10 15:45:38,646 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): After roll log in backup subprocedure, current log number: 1470869138221 2016-08-10 15:45:38,646 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-10 15:45:38,649 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:45:38,650 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56348; # active connections: 7 2016-08-10 15:45:38,651 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu.hfs.0 (auth:SIMPLE) 2016-08-10 15:45:38,651 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56348 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:45:38,654 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-10 15:45:38,655 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:38,656 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-10 15:45:38,656 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.16.34,56228,1470869104167' in zk 2016-08-10 15:45:38,659 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,659 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-10 15:45:38,659 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,659 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:38,659 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 2016-08-10 15:45:38,659 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,660 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,660 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:38,660 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:45:38,660 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:38,661 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,661 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,661 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,662 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:38,662 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:38,662 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,663 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,663 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,663 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.16.34,56228,1470869104167': 2016-08-10 15:45:38,664 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56228,1470869104167' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more 2016-08-10 15:45:38,664 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed 2016-08-10 15:45:38,664 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(230): Running finish phase.
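Both members have now released the barrier, so the coordinator enters the finish phase below and clears every znode the procedure created under acquired/, reached/ and abort/ (the "Clearing all znodes for procedure rolllog ..." entry). HBase does this through its ZKUtil recursive-delete helper; the bare ZooKeeper equivalent is:

    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeCleanupSketch {
      // Delete a znode and everything under it, children first.
      static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
        for (String child : zk.getChildren(path, false)) {
          deleteRecursively(zk, path + "/" + child);
        }
        zk.delete(path, -1); // -1: delete regardless of znode version
      }
    }

Called as deleteRecursively(zk, "/1/rolllog-proc/acquired"), and likewise for reached/ and abort/, this produces exactly the cascade of NodeDeleted and NodeChildrenChanged watch events that fills the next few entries.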
2016-08-10 15:45:38,664 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-10 15:45:38,664 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog 2016-08-10 15:45:38,664 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort 2016-08-10 15:45:38,665 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,665 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:38,666 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,665 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:45:38,666 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:45:38,666 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:45:38,666 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:45:38,666 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,666 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:38,666 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,666 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263):
|----rolllog 2016-08-10 15:45:38,667 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,667 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,667 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,667 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:38,668 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,668 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,668 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:38,668 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:45:38,669 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,669 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,669 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:45:38,669 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:45:38,670 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:45:38,670 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,672 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,673 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,673 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,673 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:45:38,673 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-10 15:45:38,673 INFO [main-EventThread] 
procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:45:38,673 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:45:38,673 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:45:38,673 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:45:38,673 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:45:38,673 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:38,673 INFO [ProcedureExecutor-4] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog 2016-08-10 15:45:38,674 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:45:38,674 INFO [ProcedureExecutor-4] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful! 2016-08-10 15:45:38,675 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:45:38,674 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:45:38,675 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:45:38,675 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,675 DEBUG [ProcedureExecutor-4] procedure.MasterProcedureUtil(101): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. (max 857 ms per retry) 2016-08-10 15:45:38,675 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:45:38,675 DEBUG [ProcedureExecutor-4] procedure.MasterProcedureUtil(110): (#1) Sleeping: 100ms while waiting for procedure completion. 
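While the region servers do the barrier work, the caller does not block on it; MasterProcedureUtil polls "is the procedure done" against the master with a growing sleep between checks, bounded both per retry (857 ms here) and by an overall deadline (300000 ms). Below is a sketch of that style of bounded wait, assuming a caller-supplied isDone check that stands in for the master RPC; the growth schedule in the log is 100, 200, 300, 500 ms, so the doubling used here only illustrates the per-retry cap, not the exact schedule.

    import java.util.function.BooleanSupplier;

    // Sketch of a bounded polling wait in the style of the log above: the
    // sleep grows each retry but is capped per retry and by a total deadline.
    final class ProcedureWaitSketch {
      static boolean waitForDone(BooleanSupplier isDone, long maxWaitMs,
          long maxPerRetryMs) throws InterruptedException {
        long waited = 0;
        long sleep = 100; // the first retry in the log sleeps 100 ms
        while (waited < maxWaitMs) {
          if (isDone.getAsBoolean()) {
            return true;
          }
          Thread.sleep(sleep);
          waited += sleep;
          sleep = Math.min(sleep * 2, maxPerRetryMs); // grow, capped per retry
        }
        return isDone.getAsBoolean();
      }
    }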
2016-08-10 15:45:38,675 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:45:38,675 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-10 15:45:38,675 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:45:38,675 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:45:38,675 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:45:38,760 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:45:38,780 DEBUG [ProcedureExecutor-4] procedure.MasterProcedureUtil(116): Getting current status of procedure from master... 2016-08-10 15:45:38,781 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(222): read region server last roll log result from hbase:backup 2016-08-10 15:45:38,812 WARN [ProcedureExecutor-4] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429; Not a host:port pair: 10.22.16.34,56226,1470869103454.meta 2016-08-10 15:45:38,813 WARN [ProcedureExecutor-4] util.BackupServerUtil(237): Skip log file (can't parse): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:45:38,816 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1470869137937 hdfs://localhost:56218/backupUT files [hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985] 2016-08-10 15:45:38,817 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(483): add: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 2016-08-10 15:45:38,817 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(483): add: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 2016-08-10 15:45:38,818 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:45:38,951 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1470869138934_ns1_test-1470869129051 from 
backupContext to request snapshot for backup. 2016-08-10 15:45:38,952 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1470869138934_ns1_test-1470869129051 org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1470869138934_ns1_test-1470869129051' doesn't exist on the filesystem at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272) at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565) at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-10 15:45:38,953 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot... 2016-08-10 15:45:38,954 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot. 2016-08-10 15:45:38,988 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:test-1470869129051/write-master:562260000000001 2016-08-10 15:45:38,989 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1470869138934_ns1_test-1470869129051 C_M_SNAPSHOT_TABLE on table ns1:test-1470869129051 2016-08-10 15:45:38,990 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1470869138934_ns1_test-1470869129051 table=ns1:test-1470869129051 type=FLUSH } 2016-08-10 15:45:38,991 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1470869138934_ns1_test-1470869129051 table=ns1:test-1470869129051 type=FLUSH }' to complete. (max 857 ms per retry) 2016-08-10 15:45:38,991 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion. 
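The DEBUG stack trace above is expected rather than a failure: before requesting a fresh snapshot for the backup, FullTableBackupProcedure tries to delete any stale snapshot of the same name, and SnapshotDoesNotExistException simply means there was nothing to clean up. The snapshot that then starts is an ordinary FLUSH-type table snapshot, the same operation a client can request through the public Admin API. A minimal client-side sketch follows; the connection setup is standard, and the snapshot and table names are the test's own, reused here purely for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Client-API equivalent of the FLUSH snapshot the backup procedure requests.
    public class SnapshotSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // For an enabled table this takes a flush-based snapshot, matching
          // 'type=FLUSH' in the log.
          admin.snapshot("snapshot_1470869138934_ns1_test-1470869129051",
              TableName.valueOf("ns1:test-1470869129051"));
        }
      }
    }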
2016-08-10 15:45:38,997 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:45:38,999 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,000 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1470869138934_ns1_test-1470869129051' 2016-08-10 15:45:39,000 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:39,000 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(204): Procedure 'snapshot_1470869138934_ns1_test-1470869129051' starting 'acquire' 2016-08-10 15:45:39,000 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1470869138934_ns1_test-1470869129051', kicking off acquire phase on members. 2016-08-10 15:45:39,001 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,001 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,002 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:39,002 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,002 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:39,002 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:39,002 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:39,002 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:39,002 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:39,002 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not 
yet exist, /1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,002 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-10 15:45:39,003 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,003 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,003 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,003 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 77 2016-08-10 15:45:39,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 77 2016-08-10 15:45:39,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,004 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869138934_ns1_test-1470869129051 from table ns1:test-1470869129051 type FLUSH 2016-08-10 15:45:39,004 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869138934_ns1_test-1470869129051 from table ns1:test-1470869129051 type FLUSH 2016-08-10 15:45:39,013 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,013 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,017 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' with timeout 300000ms 2016-08-10 15:45:39,017 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:39,017 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' with timeout 300000ms 2016-08-10 15:45:39,018 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:39,017 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' starting 'acquire' 
stage 2016-08-10 15:45:39,018 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' locally acquired 2016-08-10 15:45:39,018 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure (snapshot_1470869138934_ns1_test-1470869129051) in zk 2016-08-10 15:45:39,018 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' starting 'acquire' stage 2016-08-10 15:45:39,018 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' locally acquired 2016-08-10 15:45:39,018 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure (snapshot_1470869138934_ns1_test-1470869129051) in zk 2016-08-10 15:45:39,019 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,020 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,020 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,020 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,020 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,020 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,020 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:39,020 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,020 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,020 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 
'snapshot_1470869138934_ns1_test-1470869129051' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:39,020 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:39,020 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:39,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:39,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,022 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,022 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:39,023 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:39,023 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure 'snapshot_1470869138934_ns1_test-1470869129051' on coordinator 2016-08-10 15:45:39,023 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@375bc767[Count = 0] remaining members to acquire global barrier 2016-08-10 15:45:39,023 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(212): Procedure 'snapshot_1470869138934_ns1_test-1470869129051' starting 'in-barrier' execution. 2016-08-10 15:45:39,023 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,024 DEBUG 
[(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-10 15:45:39,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,024 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' received 'reached' from coordinator. 2016-08-10 15:45:39,024 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' received 'reached' from coordinator. 2016-08-10 15:45:39,024 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,025 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:39,025 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:39,025 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' locally completed 2016-08-10 15:45:39,025 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869138934_ns1_test-1470869129051' completed for member '10.22.16.34,56226,1470869103454' in zk 2016-08-10 15:45:39,025 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:39,025 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,026 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' has notified controller of completion 2016-08-10 15:45:39,026 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,026 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions 2016-08-10 15:45:39,026 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:39,026 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:39,026 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. started... 2016-08-10 15:45:39,026 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish. 
2016-08-10 15:45:39,027 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,026 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' completed. 2016-08-10 15:45:39,027 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=32.57 KB 2016-08-10 15:45:39,027 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:39,028 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:39,028 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,028 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,028 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,045 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:39,092 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:39,092 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869138934_ns1_test-1470869129051 table=ns1:test-1470869129051 type=FLUSH }' is still in progress! 2016-08-10 15:45:39,092 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion. 2016-08-10 15:45:39,246 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741854_1030{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 12093 2016-08-10 15:45:39,268 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:45:39,294 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:39,294 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869138934_ns1_test-1470869129051 table=ns1:test-1470869129051 type=FLUSH }' is still in progress! 2016-08-10 15:45:39,294 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion. 2016-08-10 15:45:39,598 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:39,598 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869138934_ns1_test-1470869129051 table=ns1:test-1470869129051 type=FLUSH }' is still in progress! 2016-08-10 15:45:39,598 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#4) Sleeping: 500ms while waiting for snapshot completion. 
2016-08-10 15:45:39,653 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=203, memsize=32.6 K, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/.tmp/316c589ae70c468088bcdd6144bb4090 2016-08-10 15:45:39,907 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/.tmp/316c589ae70c468088bcdd6144bb4090 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 2016-08-10 15:45:39,916 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090, entries=199, sequenceid=203, filesize=11.8 K 2016-08-10 15:45:39,917 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:45:39,918 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] regionserver.HRegion(2545): Finished memstore flush of ~32.57 KB/33352, currentsize=0 B/0 for region ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. in 891ms, sequenceid=203, compaction requested=false 2016-08-10 15:45:39,927 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.SnapshotManifest(203): Storing 'ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.' region-info for snapshot. 2016-08-10 15:45:39,933 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles 2016-08-10 15:45:39,936 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090] hfiles 2016-08-10 15:45:39,936 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.SnapshotManifest(226): Adding reference for file (1/1): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 2016-08-10 15:45:39,971 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:45:39,971 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. completed. 
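The flush that backs the snapshot follows HBase's usual two-step publish for store files, visible in the entries above: the memstore is written to a file under the region's .tmp directory, and only once complete is it committed by a rename into the column-family directory (f/ here); the snapshot manifest then records a reference to the committed HFile rather than copying it. The rename-to-publish pattern, sketched here against the generic Hadoop FileSystem API rather than HBase's internal HRegionFileSystem, looks roughly like this:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Write-to-.tmp-then-rename: the file only becomes visible under the
    // family directory once it is fully written, via a single rename.
    public class CommitStoreFileSketch {
      static Path commit(FileSystem fs, Path tmpFile, Path familyDir)
          throws IOException {
        Path dst = new Path(familyDir, tmpFile.getName());
        if (!fs.rename(tmpFile, dst)) {
          throw new IOException("Failed to commit " + tmpFile + " to " + dst);
        }
        return dst;
      }
    }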
2016-08-10 15:45:39,972 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool21-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. 2016-08-10 15:45:39,972 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots. 2016-08-10 15:45:39,972 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots. 2016-08-10 15:45:39,972 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,972 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' locally completed 2016-08-10 15:45:39,972 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869138934_ns1_test-1470869129051' completed for member '10.22.16.34,56228,1470869104167' in zk 2016-08-10 15:45:39,973 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' has notified controller of completion 2016-08-10 15:45:39,973 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,973 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:39,973 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869138934_ns1_test-1470869129051' completed. 
2016-08-10 15:45:39,973 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,974 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,974 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,974 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:39,974 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:39,974 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:39,975 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,975 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,975 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,976 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:39,976 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:39,976 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,976 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,977 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,977 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1470869138934_ns1_test-1470869129051' member '10.22.16.34,56228,1470869104167': 2016-08-10 15:45:39,977 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56228,1470869104167' released barrier for procedure 'snapshot_1470869138934_ns1_test-1470869129051', counting down latch. Waiting for 0 more 2016-08-10 15:45:39,977 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(221): Procedure 'snapshot_1470869138934_ns1_test-1470869129051' execution completed 2016-08-10 15:45:39,977 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(230): Running finish phase. 
2016-08-10 15:45:39,977 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-10 15:45:39,977 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,977 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1470869138934_ns1_test-1470869129051 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2016-08-10 15:45:39,978 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,978 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,978 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,978 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,978 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,978 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:39,979 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,979 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-10 15:45:39,979 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:39,979 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] 
zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,979 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,980 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,980 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,980 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:39,980 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,981 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,981 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,981 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:39,981 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,981 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,982 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,983 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:39,983 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:39,983 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:39,983 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
2016-08-10 15:45:39,983 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,983 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 because node does not exist (not an error) 2016-08-10 15:45:39,983 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest 2016-08-10 15:45:39,983 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-10 15:45:39,983 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-10 15:45:39,984 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-10 15:45:39,984 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-10 15:45:39,984 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-10 15:45:39,984 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-10 15:45:39,984 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,984 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,984 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,984 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,984 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:39,984 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:39,984 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new 
procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:39,985 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56226,1470869103454 2016-08-10 15:45:39,985 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,985 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051/10.22.16.34,56228,1470869104167 2016-08-10 15:45:39,985 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,985 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:39,994 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:40,009 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:45:40,011 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741855_1031 127.0.0.1:56219 2016-08-10 15:45:40,033 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869138934_ns1_test-1470869129051 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:40,034 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1470869138934_ns1_test-1470869129051 of table ns1:test-1470869129051 completed 2016-08-10 15:45:40,034 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:45:40,036 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:test-1470869129051/write-master:562260000000001 2016-08-10 15:45:40,101 DEBUG 
[ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:40,101 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1470869138934_ns1_test-1470869129051 table=ns1:test-1470869129051 type=FLUSH }' has completed, notifying client. 2016-08-10 15:45:40,101 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1470869140101_ns2_test-14708691290511 from backupContext to request snapshot for backup. 2016-08-10 15:45:40,103 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1470869140101_ns2_test-14708691290511 org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1470869140101_ns2_test-14708691290511' doesn't exist on the filesystem at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272) at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565) at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-10 15:45:40,104 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot... 2016-08-10 15:45:40,105 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot. 2016-08-10 15:45:40,111 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:test-14708691290511/write-master:562260000000001 2016-08-10 15:45:40,111 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1470869140101_ns2_test-14708691290511 table=ns2:test-14708691290511 type=FLUSH } 2016-08-10 15:45:40,111 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1470869140101_ns2_test-14708691290511 C_M_SNAPSHOT_TABLE on table ns2:test-14708691290511 2016-08-10 15:45:40,111 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1470869140101_ns2_test-14708691290511 table=ns2:test-14708691290511 type=FLUSH }' to complete. (max 857 ms per retry) 2016-08-10 15:45:40,111 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion. 
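With the ns1 snapshot published (built under .hbase-snapshot/.tmp and moved into .hbase-snapshot in one rename, as the TakeSnapshotHandler entries above show), the same delete-then-recreate idiom now repeats for the second table in the backup set, ns2:test-14708691290511: the stack trace is again the benign "nothing stale to delete" case, after which a fresh FLUSH snapshot starts under that table's own table lock. Expressed against the client API, the idiom is a delete that treats "snapshot doesn't exist" as success; whether this specific exception subclass surfaces on the client can vary by HBase version, so this is a sketch of the logic visible in the log, not a guaranteed client contract.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException;

    // Delete a possibly-absent snapshot before re-taking it, treating the
    // "doesn't exist" case as success rather than as an error.
    final class DeleteStaleSnapshotSketch {
      static void deleteIfPresent(Admin admin, String snapshotName)
          throws IOException {
        try {
          admin.deleteSnapshot(snapshotName);
        } catch (SnapshotDoesNotExistException expected) {
          // Nothing stale to clean up; proceed to take the new snapshot.
        }
      }
    }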
2016-08-10 15:45:40,118 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:45:40,120 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,121 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1470869140101_ns2_test-14708691290511' 2016-08-10 15:45:40,121 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:40,121 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(204): Procedure 'snapshot_1470869140101_ns2_test-14708691290511' starting 'acquire' 2016-08-10 15:45:40,121 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1470869140101_ns2_test-14708691290511', kicking off acquire phase on members. 2016-08-10 15:45:40,122 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,122 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,123 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:40,123 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,123 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:40,123 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:40,123 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:40,123 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:40,123 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:40,123 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode 
that does not yet exist, /1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,123 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-10 15:45:40,124 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,124 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,124 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,124 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,124 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-10 15:45:40,124 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-10 15:45:40,124 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,125 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,125 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869140101_ns2_test-14708691290511 from table ns2:test-14708691290511 type FLUSH 2016-08-10 15:45:40,125 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869140101_ns2_test-14708691290511 from table ns2:test-14708691290511 type FLUSH 2016-08-10 15:45:40,125 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,125 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,125 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' with timeout 300000ms 2016-08-10 15:45:40,125 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' with timeout 300000ms 2016-08-10 15:45:40,125 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:40,125 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:40,126 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 
'snapshot_1470869140101_ns2_test-14708691290511' starting 'acquire' stage 2016-08-10 15:45:40,126 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' starting 'acquire' stage 2016-08-10 15:45:40,126 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' locally acquired 2016-08-10 15:45:40,126 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' locally acquired 2016-08-10 15:45:40,126 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure (snapshot_1470869140101_ns2_test-14708691290511) in zk 2016-08-10 15:45:40,126 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure (snapshot_1470869140101_ns2_test-14708691290511) in zk 2016-08-10 15:45:40,127 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,127 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,127 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,127 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,127 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,127 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,127 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:40,127 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:40,127 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,127 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 
2016-08-10 15:45:40,127 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,127 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:40,128 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:40,128 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,128 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,128 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,129 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:40,129 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:40,129 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure 'snapshot_1470869140101_ns2_test-14708691290511' on coordinator 2016-08-10 15:45:40,129 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@26cd16d[Count = 0] remaining members to acquire global barrier 2016-08-10 15:45:40,129 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(212): Procedure 'snapshot_1470869140101_ns2_test-14708691290511' starting 'in-barrier' execution. 
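
The znode dump above is the first half of the two-phase barrier that drives online snapshots: the coordinator creates acquired/<procedure>, each member registers itself as a child of that node, and once every expected member has joined, the coordinator creates reached/<procedure> to release them into the 'in-barrier' phase (abort/<procedure> is the escape hatch). A bare-ZooKeeper sketch of the coordinator side, assuming the acquired/reached parents already exist and exactly one expected member; the real ZKProcedureCoordinatorRpcs uses watches rather than this polling loop:

    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierCoordinatorSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });  // port is illustrative
        String proc = "/1/online-snapshot/acquired/snapshot_demo";
        // 1. Announce the procedure; members watching acquired/ pick it up.
        zk.create(proc, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // 2. Wait until the expected member has created acquired/<proc>/<member>.
        List<String> members = zk.getChildren(proc, false);
        while (members.size() < 1) {
          Thread.sleep(50);
          members = zk.getChildren(proc, false);
        }
        // 3. All members acquired: flip the global barrier by creating reached/<proc>.
        zk.create("/1/online-snapshot/reached/snapshot_demo", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.close();
      }
    }
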
2016-08-10 15:45:40,129 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' received 'reached' from coordinator. 2016-08-10 15:45:40,130 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,130 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,130 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-10 15:45:40,130 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' locally completed 2016-08-10 15:45:40,130 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' received 'reached' from coordinator.
2016-08-10 15:45:40,130 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869140101_ns2_test-14708691290511' completed for member '10.22.16.34,56226,1470869103454' in zk 2016-08-10 15:45:40,130 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:40,130 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:40,130 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions 2016-08-10 15:45:40,131 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish. 2016-08-10 15:45:40,131 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. 2016-08-10 15:45:40,131 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. started... 2016-08-10 15:45:40,131 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:40,131 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=32.57 KB 2016-08-10 15:45:40,131 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' has notified controller of completion 2016-08-10 15:45:40,131 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,131 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:40,132 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' completed. 
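
On the member side the ordering above is: receive 'reached', run the local work (here, the flush snapshot task), mark the subprocedure locally completed, then advertise completion by creating reached/<procedure>/<member>. A matching bare-ZooKeeper sketch of that member role; host, procedure, and member names are illustrative, and the real Subprocedure additionally wires in timeouts and the abort znode, omitted here:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierMemberSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, e -> { });  // port is illustrative
        String proc = "snapshot_demo", member = "rs1";
        String acquired = "/1/online-snapshot/acquired/" + proc + "/" + member;
        String reached = "/1/online-snapshot/reached/" + proc;
        // 1. 'acquire': signal readiness by joining the acquired barrier.
        zk.create(acquired, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // 2. Block until the coordinator creates reached/<proc>; exists() sets a one-shot watch.
        CountDownLatch barrier = new CountDownLatch(1);
        if (zk.exists(reached, event -> {
              if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                barrier.countDown();
              }
            }) != null) {
          barrier.countDown();  // barrier already up
        }
        barrier.await();
        // 3. Local snapshot work would run here; then report completion in zk.
        zk.create(reached + "/" + member, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.close();
      }
    }
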
2016-08-10 15:45:40,132 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:40,132 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,132 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,133 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:40,133 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:40,133 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,134 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,134 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,152 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741858_1034{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:45:40,152 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=203, memsize=32.6 K, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/.tmp/0d7711c716f649a68e90fec66516fa56 2016-08-10 15:45:40,163 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/.tmp/0d7711c716f649a68e90fec66516fa56 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 2016-08-10 15:45:40,171 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56, entries=199, sequenceid=203, filesize=11.8 K 2016-08-10 15:45:40,171 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:45:40,172 INFO [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] regionserver.HRegion(2545): Finished memstore flush of ~32.57 KB/33352, currentsize=0 B/0 for region ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. in 41ms, sequenceid=203, compaction requested=false 2016-08-10 15:45:40,173 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(203): Storing 'ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.' 
region-info for snapshot. 2016-08-10 15:45:40,173 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles 2016-08-10 15:45:40,173 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56] hfiles 2016-08-10 15:45:40,173 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(226): Adding reference for file (1/1): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 2016-08-10 15:45:40,180 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741859_1035{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 91 2016-08-10 15:45:40,212 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:40,212 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869140101_ns2_test-14708691290511 table=ns2:test-14708691290511 type=FLUSH }' is still in progress! 2016-08-10 15:45:40,212 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion. 2016-08-10 15:45:40,271 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:45:40,417 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:40,418 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869140101_ns2_test-14708691290511 table=ns2:test-14708691290511 type=FLUSH }' is still in progress! 2016-08-10 15:45:40,418 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion. 2016-08-10 15:45:40,585 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. completed. 2016-08-10 15:45:40,585 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. 2016-08-10 15:45:40,585 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots. 2016-08-10 15:45:40,586 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots.
2016-08-10 15:45:40,586 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,586 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' locally completed 2016-08-10 15:45:40,586 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869140101_ns2_test-14708691290511' completed for member '10.22.16.34,56228,1470869104167' in zk 2016-08-10 15:45:40,590 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,590 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,590 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,590 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,590 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:40,590 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:40,590 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' has notified controller of completion 2016-08-10 15:45:40,590 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:45:40,590 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869140101_ns2_test-14708691290511' completed. 
2016-08-10 15:45:40,591 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:40,592 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,592 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,593 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,593 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:40,594 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:40,594 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,594 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,595 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,595 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1470869140101_ns2_test-14708691290511' member '10.22.16.34,56228,1470869104167': 2016-08-10 15:45:40,595 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56228,1470869104167' released barrier for procedure 'snapshot_1470869140101_ns2_test-14708691290511', counting down latch. Waiting for 0 more 2016-08-10 15:45:40,595 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(221): Procedure 'snapshot_1470869140101_ns2_test-14708691290511' execution completed 2016-08-10 15:45:40,596 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(230): Running finish phase.
2016-08-10 15:45:40,596 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-10 15:45:40,596 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,596 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1470869140101_ns2_test-14708691290511 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2016-08-10 15:45:40,597 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,597 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,597 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,597 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,597 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,597 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-10 15:45:40,598 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-10 15:45:40,598 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,598 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:40,598 DEBUG
[(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,598 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,599 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,599 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,599 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:40,600 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,600 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,600 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,600 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:40,600 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,601 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,601 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,602 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:40,602 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:40,602 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:40,602 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
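
The cleanup pass logged above ("Clearing all znodes for procedure ...") removes the procedure's nodes under the acquired/, reached/ and abort/ branches; the transient NodeCreated on abort/... is part of the same cleanup handshake, and the watchers then report the matching NodeDeleted and NodeChildrenChanged events. A sketch of such a recursive clear against plain ZooKeeper; ZKProcedureUtil performs the equivalent internally, so this only illustrates the shape of the operation, with host and names assumed:

    import org.apache.zookeeper.ZooKeeper;

    public class ClearProcedureZNodes {
      // Children must go before their parent; recurse depth-first.
      static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
        for (String child : zk.getChildren(path, false)) {
          deleteRecursively(zk, path + "/" + child);
        }
        zk.delete(path, -1);  // version -1: delete regardless of znode version
      }

      static void clear(ZooKeeper zk, String base, String proc) throws Exception {
        for (String phase : new String[] { "acquired", "reached", "abort" }) {
          String node = base + "/" + phase + "/" + proc;
          if (zk.exists(node, false) != null) {
            deleteRecursively(zk, node);
          }
        }
      }

      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, e -> { });  // illustrative
        clear(zk, "/1/online-snapshot", "snapshot_demo");
        zk.close();
      }
    }
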
2016-08-10 15:45:40,602 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,603 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest 2016-08-10 15:45:40,603 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-10 15:45:40,603 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 because node does not exist (not an error) 2016-08-10 15:45:40,603 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-10 15:45:40,603 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-10 15:45:40,603 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-10 15:45:40,603 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-10 15:45:40,603 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:40,604 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:40,604 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for 
new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:40,604 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,604 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,612 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741860_1036{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:45:40,614 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741859_1035 127.0.0.1:56219 2016-08-10 15:45:40,621 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869140101_ns2_test-14708691290511 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,622 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1470869140101_ns2_test-14708691290511 of table ns2:test-14708691290511 completed 2016-08-10 15:45:40,622 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:45:40,624 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:test-14708691290511/write-master:562260000000001 2016-08-10 
15:45:40,719 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-10 15:45:40,719 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1470869140101_ns2_test-14708691290511 table=ns2:test-14708691290511 type=FLUSH }' has completed, notifying client. 2016-08-10 15:45:40,719 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1470869140719_ns3_test-14708691290512 from backupContext to request snapshot for backup. 2016-08-10 15:45:40,721 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1470869140719_ns3_test-14708691290512
org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1470869140719_ns3_test-14708691290512' doesn't exist on the filesystem
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:45:40,722 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot... 2016-08-10 15:45:40,723 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot. 2016-08-10 15:45:40,730 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:test-14708691290512/write-master:562260000000001 2016-08-10 15:45:40,730 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1470869140719_ns3_test-14708691290512 table=ns3:test-14708691290512 type=FLUSH } 2016-08-10 15:45:40,730 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1470869140719_ns3_test-14708691290512 C_M_SNAPSHOT_TABLE on table ns3:test-14708691290512 2016-08-10 15:45:40,730 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1470869140719_ns3_test-14708691290512 table=ns3:test-14708691290512 type=FLUSH }' to complete. (max 857 ms per retry) 2016-08-10 15:45:40,730 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion.
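
The wait loop BackupServerUtil logs here has a simple shape: an overall budget of 300000 ms, a per-retry cap on the pause (the "max 857 ms per retry" above), and a sleep that grows linearly, 100 ms, 200 ms, 300 ms, and so on, between status checks. A self-contained sketch of that linear-backoff wait, with the completion check supplied by the caller; the class and method names are illustrative, not BackupServerUtil's internals:

    import java.util.function.BooleanSupplier;

    public class WaitForSnapshotSketch {
      // Polls 'done', sleeping 100ms, 200ms, 300ms, ... between checks (capped at
      // maxPausePerRetryMs), and gives up once maxWaitMs has been spent sleeping.
      static boolean waitForCompletion(BooleanSupplier done, long maxWaitMs,
          long maxPausePerRetryMs) throws InterruptedException {
        long slept = 0;
        for (int attempt = 1; slept < maxWaitMs; attempt++) {
          if (done.getAsBoolean()) {
            return true;
          }
          long pause = Math.min(100L * attempt, maxPausePerRetryMs);
          System.out.printf("(#%d) Sleeping: %dms while waiting for snapshot completion.%n",
              attempt, pause);
          Thread.sleep(pause);
          slept += pause;
        }
        return done.getAsBoolean();
      }

      public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        boolean ok = waitForCompletion(
            () -> System.currentTimeMillis() - start > 500,  // stand-in for an isSnapshotDone check
            300000, 857);
        System.out.println("completed: " + ok);
      }
    }
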
2016-08-10 15:45:40,737 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741861_1037{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:45:40,739 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,739 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1470869140719_ns3_test-14708691290512' 2016-08-10 15:45:40,739 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:40,740 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(204): Procedure 'snapshot_1470869140719_ns3_test-14708691290512' starting 'acquire' 2016-08-10 15:45:40,740 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1470869140719_ns3_test-14708691290512', kicking off acquire phase on members. 2016-08-10 15:45:40,740 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,741 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,741 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:40,741 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,741 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:40,741 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-10 15:45:40,741 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:40,741 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-10 15:45:40,742 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-10 15:45:40,742 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode 
that does not yet exist, /1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,742 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-10 15:45:40,742 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,742 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,742 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,742 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,743 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-10 15:45:40,743 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,743 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-10 15:45:40,743 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869140719_ns3_test-14708691290512 from table ns3:test-14708691290512 type FLUSH 2016-08-10 15:45:40,743 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,743 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869140719_ns3_test-14708691290512 from table ns3:test-14708691290512 type FLUSH 2016-08-10 15:45:40,743 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,743 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,743 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' with timeout 300000ms 2016-08-10 15:45:40,743 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' with timeout 300000ms 2016-08-10 15:45:40,743 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:40,743 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-10 15:45:40,744 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 
'snapshot_1470869140719_ns3_test-14708691290512' starting 'acquire' stage 2016-08-10 15:45:40,744 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' starting 'acquire' stage 2016-08-10 15:45:40,744 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' locally acquired 2016-08-10 15:45:40,744 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' locally acquired 2016-08-10 15:45:40,744 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure (snapshot_1470869140719_ns3_test-14708691290512) in zk 2016-08-10 15:45:40,744 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure (snapshot_1470869140719_ns3_test-14708691290512) in zk 2016-08-10 15:45:40,745 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,745 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,745 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,745 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,745 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,745 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,745 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:40,745 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,745 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:45:40,745 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 
2016-08-10 15:45:40,745 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,745 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:45:40,746 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:45:40,746 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:45:40,746 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:45:40,746 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:45:40,746 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:45:40,747 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:45:40,747 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure 'snapshot_1470869140719_ns3_test-14708691290512' on coordinator 2016-08-10 15:45:40,747 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@3512a826[Count = 0] remaining members to acquire global barrier 2016-08-10 15:45:40,747 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(212): Procedure 'snapshot_1470869140719_ns3_test-14708691290512' starting 'in-barrier' execution. 
2016-08-10 15:45:40,747 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,747 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,747 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,748 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,748 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,748 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,748 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:40,748 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-10 15:45:40,748 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' received 'reached' from coordinator.
2016-08-10 15:45:40,748 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,748 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' locally completed
2016-08-10 15:45:40,748 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,748 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869140719_ns3_test-14708691290512' completed for member '10.22.16.34,56226,1470869103454' in zk
2016-08-10 15:45:40,748 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-10 15:45:40,748 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:40,748 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' received 'reached' from coordinator.
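On the coordinator side, releasing the members is a single znode creation: once the latch reaches zero, the coordinator creates reached/<procedure>, and every member's pre-registered watch fires at once, which is why both the master and the regionserver log the same NodeCreated event above. A sketch of that broadcast, under the same assumptions as the previous snippet:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierCoordinatorSketch {
      /** Fan 'reached' out to all members by creating one barrier znode. */
      public static void broadcastReached(ZooKeeper zk, String baseZNode, String procName)
          throws KeeperException, InterruptedException {
        String reachedZNode = baseZNode + "/online-snapshot/reached/" + procName;
        // One create; every member watching this path receives a NodeCreated event.
        zk.create(reachedZNode, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // The coordinator then waits for each member to report back by creating
        // reached/<procName>/<member>, watched the same way as in the acquire phase.
      }
    }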
2016-08-10 15:45:40,748 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions
2016-08-10 15:45:40,749 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:40,749 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:40,749 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish.
2016-08-10 15:45:40,749 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1. started...
2016-08-10 15:45:40,749 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' has notified controller of completion
2016-08-10 15:45:40,749 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,749 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-10 15:45:40,750 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' completed.
2016-08-10 15:45:40,750 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.SnapshotManifest(203): Storing 'ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.' region-info for snapshot.
2016-08-10 15:45:40,750 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles
2016-08-10 15:45:40,750 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:40,750 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [] hfiles
2016-08-10 15:45:40,750 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:40,751 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:40,751 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:40,751 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,752 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:40,752 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:40,759 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741862_1038{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 52
2016-08-10 15:45:40,832 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-10 15:45:40,833 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869140719_ns3_test-14708691290512 table=ns3:test-14708691290512 type=FLUSH }' is still in progress!
2016-08-10 15:45:40,833 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion.
2016-08-10 15:45:41,038 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-10 15:45:41,039 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1470869140719_ns3_test-14708691290512 table=ns3:test-14708691290512 type=FLUSH }' is still in progress!
2016-08-10 15:45:41,039 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion.
2016-08-10 15:45:41,165 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1. completed.
2016-08-10 15:45:41,165 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:45:41,165 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots.
2016-08-10 15:45:41,166 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots.
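While the region flush runs, the backup procedure polls the master for snapshot status with a linearly growing pause: the log shows (#1) 100ms, (#2) 200ms, (#3) 300ms, bounded by an overall 300000 ms deadline and a per-retry cap (857 ms in this run). A sketch of that wait loop; the BooleanSupplier stands in for the status RPC the log shows, and all names are illustrative:

    import java.util.function.BooleanSupplier;

    public class SnapshotWaitSketch {
      /** Poll until done, sleeping attempt*100ms per retry (capped), up to maxWaitMs total. */
      public static boolean waitForSnapshot(BooleanSupplier snapshotDone, long maxWaitMs,
          long maxPausePerRetryMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        int attempt = 0;
        while (System.currentTimeMillis() < deadline) {
          attempt++;
          long pause = Math.min(attempt * 100L, maxPausePerRetryMs);
          System.out.println("(#" + attempt + ") Sleeping: " + pause
              + "ms while waiting for snapshot completion.");
          Thread.sleep(pause);
          if (snapshotDone.getAsBoolean()) {  // "Getting current status of snapshot ..."
            return true;
          }
        }
        return false;  // caller surfaces this as a snapshot timeout
      }
    }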
2016-08-10 15:45:41,166 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,166 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' locally completed
2016-08-10 15:45:41,166 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869140719_ns3_test-14708691290512' completed for member '10.22.16.34,56228,1470869104167' in zk
2016-08-10 15:45:41,169 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,169 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' has notified controller of completion
2016-08-10 15:45:41,169 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-10 15:45:41,170 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869140719_ns3_test-14708691290512' completed.
2016-08-10 15:45:41,169 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,170 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,171 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,171 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-10 15:45:41,171 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:41,171 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:41,172 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,172 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,172 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,173 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:41,173 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:41,173 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,174 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,174 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,175 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1470869140719_ns3_test-14708691290512' member '10.22.16.34,56228,1470869104167':
2016-08-10 15:45:41,175 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56228,1470869104167' released barrier for procedure 'snapshot_1470869140719_ns3_test-14708691290512', counting down latch. Waiting for 0 more
2016-08-10 15:45:41,175 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(221): Procedure 'snapshot_1470869140719_ns3_test-14708691290512' execution completed
2016-08-10 15:45:41,175 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(230): Running finish phase.
2016-08-10 15:45:41,175 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2016-08-10 15:45:41,175 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,175 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1470869140719_ns3_test-14708691290512 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2016-08-10 15:45:41,176 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,176 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,176 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,176 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,176 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,176 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,177 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
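The finish phase tears the barrier down. As the log shows, the cleanup creates abort/<procedure> and then deletes the procedure's nodes under all three roots, so the NodeCreated events on the abort path right after "Clearing all znodes" are a side effect of the cleanup itself, not a failure; members that race with it look up the abort node, find it already gone, and move on. A sketch of the recursive teardown with the plain ZooKeeper API (HBase's ZKProcedureUtil does this with its own helpers; the names here are illustrative):

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierCleanupSketch {
      /** Delete the procedure's subtree under each barrier root, tolerating races. */
      public static void clearProcedure(ZooKeeper zk, String baseZNode, String procName)
          throws KeeperException, InterruptedException {
        for (String root : new String[] { "acquired", "reached", "abort" }) {
          deleteRecursively(zk, baseZNode + "/online-snapshot/" + root + "/" + procName);
        }
      }

      private static void deleteRecursively(ZooKeeper zk, String path)
          throws KeeperException, InterruptedException {
        try {
          for (String child : zk.getChildren(path, false)) {
            deleteRecursively(zk, path + "/" + child);
          }
          zk.delete(path, -1);  // version -1 matches any version
        } catch (KeeperException.NoNodeException e) {
          // Already gone - cleanup races are benign here.
        }
      }
    }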
2016-08-10 15:45:41,177 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:41,177 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:41,177 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,177 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,178 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,178 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:41,179 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,179 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:41,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,181 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-10 15:45:41,181 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-10 15:45:41,181 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:41,181 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-10 15:45:41,181 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512 because node does not exist (not an error)
2016-08-10 15:45:41,181 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,182 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-10 15:45:41,182 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-10 15:45:41,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:41,182 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-10 15:45:41,182 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest
2016-08-10 15:45:41,183 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-10 15:45:41,183 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:41,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
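The "Unable to get data of znode ... because node does not exist (not an error)" line is the same tolerance applied to reads: watcher callbacks race with the teardown, so a vanished znode is treated as an empty result rather than a fault. The idiom, sketched with the plain ZooKeeper API (illustrative names):

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    public class SafeReadSketch {
      /** Return the znode's data, or null if it has already been deleted. */
      public static byte[] getDataOrNull(ZooKeeper zk, String path)
          throws KeeperException, InterruptedException {
        try {
          return zk.getData(path, false, null);  // no watch, Stat not needed
        } catch (KeeperException.NoNodeException e) {
          return null;  // "node does not exist (not an error)"
        }
      }
    }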
2016-08-10 15:45:41,183 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-10 15:45:41,183 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:41,184 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,184 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,184 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,184 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,184 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,184 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,192 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741863_1039{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:41,193 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741862_1038 127.0.0.1:56219
2016-08-10 15:45:41,198 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869140719_ns3_test-14708691290512 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,200 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1470869140719_ns3_test-14708691290512 of table ns3:test-14708691290512 completed
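"Sentinel is done, just moving the snapshot" is the publish step: the completed snapshot is renamed from the working area under .hbase-snapshot/.tmp into the visible .hbase-snapshot directory. On HDFS a directory rename is atomic, so clients never observe a half-written snapshot. A sketch with the Hadoop FileSystem API; the paths mirror the log and the helper is illustrative:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SnapshotPublishSketch {
      /** Publish a completed snapshot by renaming its tmp working dir into place. */
      public static void publish(Configuration conf, Path rootDir, String snapshotName)
          throws IOException {
        Path tmpDir = new Path(rootDir, ".hbase-snapshot/.tmp/" + snapshotName);
        Path doneDir = new Path(rootDir, ".hbase-snapshot/" + snapshotName);
        FileSystem fs = tmpDir.getFileSystem(conf);
        // HDFS rename is atomic: the snapshot appears fully formed or not at all.
        if (!fs.rename(tmpDir, doneDir)) {
          throw new IOException("Failed to rename " + tmpDir + " to " + doneDir);
        }
      }
    }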
2016-08-10 15:45:41,200 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:45:41,201 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:test-14708691290512/write-master:562260000000001
2016-08-10 15:45:41,340 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-10 15:45:41,341 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1470869140719_ns3_test-14708691290512 table=ns3:test-14708691290512 type=FLUSH }' has completed, notifying client.
2016-08-10 15:45:41,341 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1470869141341_ns4_test-14708691290513 from backupContext to request snapshot for backup.
2016-08-10 15:45:41,342 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1470869141341_ns4_test-14708691290513
org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1470869141341_ns4_test-14708691290513' doesn't exist on the filesystem
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:45:41,344 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot...
2016-08-10 15:45:41,345 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot.
2016-08-10 15:45:41,351 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:test-14708691290513/write-master:562260000000001
2016-08-10 15:45:41,351 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1470869141341_ns4_test-14708691290513 table=ns4:test-14708691290513 type=FLUSH }
2016-08-10 15:45:41,351 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1470869141341_ns4_test-14708691290513 C_M_SNAPSHOT_TABLE on table ns4:test-14708691290513
2016-08-10 15:45:41,351 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1470869141341_ns4_test-14708691290513 table=ns4:test-14708691290513 type=FLUSH }' to complete. (max 857 ms per retry)
2016-08-10 15:45:41,351 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion.
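The SnapshotDoesNotExistException above is expected: before requesting the next snapshot, the backup procedure defensively deletes any stale snapshot with the same name, and on a first attempt there is nothing to delete, so the exception is logged at DEBUG and swallowed. The same pattern, sketched against the public HBase Admin API (the procedure itself calls the master-side SnapshotManager directly; the class and method names here are illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException;

    public class ResetAndSnapshotSketch {
      /** Delete a possibly-stale snapshot, then take a fresh FLUSH snapshot of the table. */
      public static void resetAndSnapshot(Admin admin, String snapshotName, TableName table)
          throws Exception {
        try {
          admin.deleteSnapshot(snapshotName);
        } catch (SnapshotDoesNotExistException e) {
          // Expected on the first attempt - nothing to clean up.
        }
        admin.snapshot(snapshotName, table);  // FLUSH-type snapshot of an enabled table
      }
    }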
2016-08-10 15:45:41,358 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741864_1040{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:41,360 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,361 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1470869141341_ns4_test-14708691290513'
2016-08-10 15:45:41,361 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2016-08-10 15:45:41,361 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(204): Procedure 'snapshot_1470869141341_ns4_test-14708691290513' starting 'acquire'
2016-08-10 15:45:41,361 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1470869141341_ns4_test-14708691290513', kicking off acquire phase on members.
2016-08-10 15:45:41,362 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,362 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,362 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-10 15:45:41,363 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,363 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-10 15:45:41,363 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:41,362 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-10 15:45:41,363 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-10 15:45:41,363 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:41,363 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,363 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2016-08-10 15:45:41,363 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,363 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,363 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,364 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,364 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79
2016-08-10 15:45:41,364 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,364 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869141341_ns4_test-14708691290513 from table ns4:test-14708691290513 type FLUSH
2016-08-10 15:45:41,364 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79
2016-08-10 15:45:41,364 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,364 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,364 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1470869141341_ns4_test-14708691290513 from table ns4:test-14708691290513 type FLUSH
2016-08-10 15:45:41,364 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' with timeout 300000ms
2016-08-10 15:45:41,364 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2016-08-10 15:45:41,364 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' starting 'acquire' stage
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' locally acquired
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' with timeout 300000ms
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure (snapshot_1470869141341_ns4_test-14708691290513) in zk
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' starting 'acquire' stage
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' locally acquired
2016-08-10 15:45:41,365 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure (snapshot_1470869141341_ns4_test-14708691290513) in zk
2016-08-10 15:45:41,366 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,366 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,366 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,366 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,366 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,366 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,366 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-10 15:45:41,366 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,366 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,366 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-10 15:45:41,367 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-10 15:45:41,367 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:41,367 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:41,367 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,368 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,368 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,368 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:41,368 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:41,368 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure 'snapshot_1470869141341_ns4_test-14708691290513' on coordinator
2016-08-10 15:45:41,369 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@265d352c[Count = 0] remaining members to acquire global barrier
2016-08-10 15:45:41,369 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(212): Procedure 'snapshot_1470869141341_ns4_test-14708691290513' starting 'in-barrier' execution.
2016-08-10 15:45:41,369 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,369 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,369 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-10 15:45:41,369 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' received 'reached' from coordinator.
2016-08-10 15:45:41,369 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,370 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-10 15:45:41,370 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:41,370 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' locally completed
2016-08-10 15:45:41,369 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' received 'reached' from coordinator.
2016-08-10 15:45:41,370 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869141341_ns4_test-14708691290513' completed for member '10.22.16.34,56226,1470869103454' in zk
2016-08-10 15:45:41,370 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions
2016-08-10 15:45:41,370 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:41,370 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:45:41,370 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish.
2016-08-10 15:45:41,370 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. started...
2016-08-10 15:45:41,371 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,371 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' has notified controller of completion
2016-08-10 15:45:41,371 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,371 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-10 15:45:41,371 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.SnapshotManifest(203): Storing 'ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.' region-info for snapshot.
2016-08-10 15:45:41,371 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' completed.
2016-08-10 15:45:41,372 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles
2016-08-10 15:45:41,372 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,372 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [] hfiles
2016-08-10 15:45:41,372 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:41,372 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:41,372 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,373 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,373 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,377 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741865_1041{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:41,378 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971. completed.
2016-08-10 15:45:41,378 DEBUG [rs(10.22.16.34,56228,1470869104167)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:45:41,378 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots.
2016-08-10 15:45:41,378 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots.
2016-08-10 15:45:41,378 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,378 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' locally completed
2016-08-10 15:45:41,378 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1470869141341_ns4_test-14708691290513' completed for member '10.22.16.34,56228,1470869104167' in zk
2016-08-10 15:45:41,379 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,379 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' has notified controller of completion
2016-08-10 15:45:41,379 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,379 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-10 15:45:41,379 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,379 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1470869141341_ns4_test-14708691290513' completed.
2016-08-10 15:45:41,380 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,380 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-10 15:45:41,380 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:41,380 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:41,381 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,381 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,381 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,381 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:41,381 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:41,382 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,382 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,382 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,383 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1470869141341_ns4_test-14708691290513' member '10.22.16.34,56228,1470869104167':
2016-08-10 15:45:41,383 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56228,1470869104167' released barrier for procedure 'snapshot_1470869141341_ns4_test-14708691290513', counting down latch. Waiting for 0 more
2016-08-10 15:45:41,383 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(221): Procedure 'snapshot_1470869141341_ns4_test-14708691290513' execution completed
2016-08-10 15:45:41,383 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(230): Running finish phase.
2016-08-10 15:45:41,383 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2016-08-10 15:45:41,383 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,383 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1470869141341_ns4_test-14708691290513 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2016-08-10 15:45:41,384 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,384 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-10 15:45:41,384 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,384 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-10 15:45:41,384 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-10 15:45:41,385 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-10 15:45:41,385 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:41,385 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,385 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-10 15:45:41,385 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,385 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,385 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,386 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,386 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,386 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-10 15:45:41,386 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,386 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,386 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-10 15:45:41,387 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,387 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,387 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,388 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-10 15:45:41,388 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-10 15:45:41,388 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:41,388 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
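The "Current zk system" dumps above show the three-branch znode layout that ZKProcedureUtil uses for its two-phase barrier: each member checks in under acquired/, the coordinator releases the barrier through reached/, and abort/ stays empty on success. A minimal standalone sketch that walks and prints the same tree with the plain ZooKeeper client (the quorum localhost:50432 and base znode /1 are taken from the log; the indentation only approximates ZKProcedureUtil's own dump):

    import org.apache.zookeeper.ZooKeeper;

    // Recursively print the online-snapshot procedure tree, roughly in the
    // "|-", "|----", "|-------" form ZKProcedureUtil logs above.
    public class ProcTreeDump {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:50432", 30000, event -> { });
        try {
          print(zk, "/1/online-snapshot", "|-");
        } finally {
          zk.close();
        }
      }

      static void print(ZooKeeper zk, String path, String prefix) throws Exception {
        System.out.println(prefix + path.substring(path.lastIndexOf('/') + 1));
        for (String child : zk.getChildren(path, false)) { // no watch: read-only walk
          print(zk, path + "/" + child, prefix + "---");
        }
      }
    }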
2016-08-10 15:45:41,388 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,388 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-10 15:45:41,389 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest
2016-08-10 15:45:41,389 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513 because node does not exist (not an error)
2016-08-10 15:45:41,389 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-10 15:45:41,389 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:41,389 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-10 15:45:41,389 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-10 15:45:41,389 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-10 15:45:41,390 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-10 15:45:41,390 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-10 15:45:41,390 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56226,1470869103454
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513/10.22.16.34,56228,1470869104167
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,390 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,398 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741866_1042{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:41,399 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741865_1041 127.0.0.1:56219
2016-08-10 15:45:41,407 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869141341_ns4_test-14708691290513 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,408 INFO [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1470869141341_ns4_test-14708691290513 of table ns4:test-14708691290513 completed
2016-08-10 15:45:41,408 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.hbase-snapshot/.tmp/snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:41,409 DEBUG [MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:test-14708691290513/write-master:562260000000001
2016-08-10 15:45:41,455 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-10 15:45:41,455 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1470869141341_ns4_test-14708691290513 table=ns4:test-14708691290513 type=FLUSH }' has completed, notifying client.
2016-08-10 15:45:41,562 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(577): snapshot copy for backup_1470869137937
2016-08-10 15:45:41,562 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(292): Snapshot copy is starting.
2016-08-10 15:45:41,567 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(304): There are 4 snapshots to be copied.
2016-08-10 15:45:41,567 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1470869141341_ns4_test-14708691290513 to hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513/
2016-08-10 15:45:41,587 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(286): Doing SNAPSHOT_COPY
2016-08-10 15:45:41,604 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:56218 inputRoot=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9
2016-08-10 15:45:41,617 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:56218 outputRoot=hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513
2016-08-10 15:45:41,619 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest
2016-08-10 15:45:41,633 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741867_1043{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:41,646 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741868_1044{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 346
2016-08-10 15:45:42,074 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
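Each "Copy snapshot ... to ..." / "Doing SNAPSHOT_COPY" pair above is MapReduceBackupCopyService driving HBase's ExportSnapshot tool with the inputRoot/outputRoot shown. A minimal sketch of the equivalent standalone invocation (the snapshot name and target directory are copied from the log; -snapshot and -copy-to are ExportSnapshot's standard options):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    // Re-run one copy step by hand: export a single snapshot into the backup root.
    public class CopyOneSnapshot {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
            "-snapshot", "snapshot_1470869141341_ns4_test-14708691290513",
            "-copy-to", "hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513"
        });
        System.exit(rc);
      }
    }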
2016-08-10 15:45:42,276 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-10 15:45:42,355 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741859_1035, blk_1073741862_1038, blk_1073741865_1041, blk_1073741855_1031]
2016-08-10 15:45:42,610 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8734128276122202279.jar
2016-08-10 15:45:45,668 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6074627716217920294.jar
2016-08-10 15:45:46,283 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-10 15:45:46,763 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-3915286939335218468.jar
2016-08-10 15:45:46,786 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-2782855988308438419.jar
2016-08-10 15:45:50,352 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-3623433101768823422.jar
2016-08-10 15:45:50,352 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-10 15:45:50,353 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-10 15:45:50,353 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-10 15:45:50,353 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:45:50,354 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-10 15:45:50,354 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-10 15:45:50,356 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:50,357 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:50,357 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:45:50,358 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:50,358 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:50,358 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:45:50,359 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:45:50,427 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-10 15:45:50,435 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-10 15:45:50,698 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1470869141341_ns4_test-14708691290513' hfile list
2016-08-10 15:45:52,320 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export
2016-08-10 15:45:52,321 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity
2016-08-10 15:45:52,326 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1470869141341_ns4_test-14708691290513
2016-08-10 15:45:52,327 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1470869141341_ns4_test-14708691290513 finished.
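The long run of "For class X, using jar Y" DEBUG lines above is TableMapReduceUtil resolving, for every class the export job touches, the jar that contains it and shipping that jar with the job; the hadoop-*.jar files under test-data are jars it packaged on the fly for classes not already inside one. A minimal sketch of the same mechanism (the job name is illustrative; tmpjars is the Hadoop property the resolved jars land in):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.mapreduce.Job;

    // Resolve HBase and transitive dependency jars for a job and show where
    // they are recorded, mirroring the DEBUG trace above.
    public class DependencyJarsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "snapshot-export");
        TableMapReduceUtil.addDependencyJars(job);
        System.out.println(job.getConfiguration().get("tmpjars"));
      }
    }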
2016-08-10 15:45:52,327 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1470869140101_ns2_test-14708691290511 to hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/
2016-08-10 15:45:52,327 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(286): Doing SNAPSHOT_COPY
2016-08-10 15:45:52,340 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:56218 inputRoot=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9
2016-08-10 15:45:52,353 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:56218 outputRoot=hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511
2016-08-10 15:45:52,355 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest
2016-08-10 15:45:52,369 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741869_1045{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:45:52,377 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741870_1046{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:45:52,378 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-10 15:45:52,596 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-7665244793428129587.jar
2016-08-10 15:45:53,790 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4872302097553780974.jar
2016-08-10 15:45:54,187 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-1233116595524829470.jar
2016-08-10 15:45:54,209 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-5756043433702267942.jar
2016-08-10 15:45:55,442 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-1577969087461570722.jar
2016-08-10 15:45:55,443 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-10 15:45:55,443 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-10 15:45:55,444 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-10 15:45:55,444 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:45:55,444 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-10 15:45:55,445 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-10 15:45:55,445 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:55,446 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:55,446 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:45:55,446 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:55,447 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:45:55,447 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:45:55,448 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:45:55,492 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-10 15:45:55,502 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
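The pair of JobResourceUploader warnings recurs before every copy job because the job driver is not run through ToolRunner. What the first warning asks for is the standard Hadoop Tool pattern, sketched minimally below (BackupCopyDriver is a hypothetical driver class, not part of the backup code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // ToolRunner parses generic options (-D, -files, -libjars, ...) and puts
    // them into the Configuration before run() ever sees argv.
    public class BackupCopyDriver extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        Configuration conf = getConf(); // already populated with parsed options
        // ... build and submit the copy job here ...
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new BackupCopyDriver(), args));
      }
    }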
2016-08-10 15:45:55,760 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1470869140101_ns2_test-14708691290511' hfile list
2016-08-10 15:45:55,767 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(629): export split=0 size=11.8 K
2016-08-10 15:45:56,285 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-10 15:45:56,293 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(181): Using bufferSize=128 M
2016-08-10 15:45:56,319 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(414): copy completed for input=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 output=hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56
2016-08-10 15:45:56,319 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(415): size=12093 (11.8 K) time=0sec 11.533M/sec
2016-08-10 15:45:56,327 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741871_1047{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 12093
2016-08-10 15:45:57,233 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export
2016-08-10 15:45:57,234 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity
2016-08-10 15:45:57,244 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1470869140101_ns2_test-14708691290511
2016-08-10 15:45:57,244 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1470869140101_ns2_test-14708691290511 finished.
2016-08-10 15:45:57,244 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1470869140719_ns3_test-14708691290512 to hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/
2016-08-10 15:45:57,245 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(286): Doing SNAPSHOT_COPY
2016-08-10 15:45:57,258 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:56218 inputRoot=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9
2016-08-10 15:45:57,271 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:56218 outputRoot=hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512
2016-08-10 15:45:57,273 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest
2016-08-10 15:45:57,288 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741872_1048{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 75
2016-08-10 15:45:57,704 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741873_1049{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 346
2016-08-10 15:45:58,112 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-10 15:45:58,325 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-5559467356131649509.jar
2016-08-10 15:45:59,493 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-2839322366253696445.jar
2016-08-10 15:45:59,884 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-9044531233569012570.jar
2016-08-10 15:45:59,906 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6509032562878558435.jar
2016-08-10 15:46:01,067 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8120030858299241869.jar
2016-08-10 15:46:01,068 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-10 15:46:01,068 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-10 15:46:01,068 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-10 15:46:01,069 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:46:01,069 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-10 15:46:01,069 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-10 15:46:01,070 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:01,070 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:01,070 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:01,071 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:01,071 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:01,071 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:01,072 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:01,109 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-10 15:46:01,118 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-10 15:46:01,378 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1470869140719_ns3_test-14708691290512' hfile list
2016-08-10 15:46:02,761 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export
2016-08-10 15:46:02,764 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity
2016-08-10 15:46:02,770 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1470869140719_ns3_test-14708691290512
2016-08-10 15:46:02,771 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1470869140719_ns3_test-14708691290512 finished.
2016-08-10 15:46:02,771 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1470869138934_ns1_test-1470869129051 to hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/
2016-08-10 15:46:02,771 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(286): Doing SNAPSHOT_COPY
2016-08-10 15:46:02,785 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:56218 inputRoot=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9
2016-08-10 15:46:02,797 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:56218 outputRoot=hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051
2016-08-10 15:46:02,800 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest
2016-08-10 15:46:02,812 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741874_1050{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:02,821 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741875_1051{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:02,823 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-10 15:46:03,025 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-3191392966432934629.jar
2016-08-10 15:46:04,161 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8016999848390920081.jar
2016-08-10 15:46:04,548 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-10943018803206785.jar
2016-08-10 15:46:04,568 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8458935837267721019.jar
2016-08-10 15:46:05,605 DEBUG [10.22.16.34,56228,1470869104167_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-10 15:46:05,736 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-327682596212765050.jar
2016-08-10 15:46:05,737 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-10 15:46:05,737 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-10 15:46:05,738 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-10 15:46:05,738 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:46:05,738 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-10 15:46:05,739 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-10 15:46:05,739 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:05,739 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:05,740 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:05,740 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:05,741 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.1/hadoop-common-2.7.1.jar
2016-08-10 15:46:05,741 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:05,741 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:05,783 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-10 15:46:05,792 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-10 15:46:05,895 INFO [10.22.16.34,56226,1470869103454_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1a6a2c96 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:05,899 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1a6a2c960x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:05,900 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4da40fec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:05,900 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-10 15:46:05,900 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:05,900 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup
2016-08-10 15:46:05,901 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1a6a2c96-0x15676a15116000f connected
2016-08-10 15:46:05,903 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:05,903 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56382; # active connections: 8
2016-08-10 15:46:05,904 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:05,906 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56382 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:05,909 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:05,910 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56383; # active connections: 4
2016-08-10 15:46:05,910 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:05,910 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56383 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:05,913 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339
2016-08-10 15:46:05,914 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339
2016-08-10 15:46:05,914 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985
2016-08-10 15:46:05,915 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985
2016-08-10 15:46:05,915 INFO [10.22.16.34,56226,1470869103454_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116000f
2016-08-10 15:46:05,916 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:05,916 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-1770276590) to /10.22.16.34:56228 from tyu: closed
2016-08-10 15:46:05,916 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-657604490) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:05,916 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56383 because read count=-1. Number of active connections: 4
2016-08-10 15:46:05,916 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56382 because read count=-1. Number of active connections: 8
2016-08-10 15:46:05,919 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-10 15:46:06,049 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1470869138934_ns1_test-1470869129051' hfile list
2016-08-10 15:46:06,051 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(629): export split=0 size=11.8 K
2016-08-10 15:46:06,288 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-10 15:46:06,486 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(181): Using bufferSize=128 M
2016-08-10 15:46:06,545 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(414): copy completed for input=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 output=hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090
2016-08-10 15:46:06,546 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(415): size=12093 (11.8 K) time=0sec 5.766M/sec
2016-08-10 15:46:06,555 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741876_1052{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:07,388 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/test-14708691290512/8229c2c41c671b66ea383beee31266e1/f
2016-08-10 15:46:07,388 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/info
2016-08-10 15:46:07,389 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/table
2016-08-10 15:46:07,389 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/test-14708691290513/066be6466168f97a0986d6b8bafdb971/f
2016-08-10 15:46:07,391 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/meta
2016-08-10 15:46:07,392 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/session
2016-08-10 15:46:07,393 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/info
2016-08-10 15:46:07,451 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export
2016-08-10 15:46:07,453 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity
2016-08-10 15:46:07,461 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1470869138934_ns1_test-1470869129051
2016-08-10 15:46:07,461 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1470869138934_ns1_test-1470869129051 finished.
2016-08-10 15:46:07,461 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(458): Add incremental backup table set to hbase:backup. ROOT=hdfs://localhost:56218/backupUT tables [ns4:test-14708691290513 ns2:test-14708691290511 ns3:test-14708691290512 ns1:test-1470869129051]
2016-08-10 15:46:07,461 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns4:test-14708691290513
2016-08-10 15:46:07,461 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns2:test-14708691290511
2016-08-10 15:46:07,461 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns3:test-14708691290512
2016-08-10 15:46:07,461 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns1:test-1470869129051
2016-08-10 15:46:07,463 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:46:07,571 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns4:test-14708691290513,ns2:test-14708691290511,ns3:test-14708691290512,ns1:test-1470869129051]
2016-08-10 15:46:07,579 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:46:07,581 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:56218/backupUT
2016-08-10 15:46:07,584 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(205): write backup start code to hbase:backup 1470869107339
2016-08-10 15:46:07,585 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:46:07,589 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-10 15:46:07,589 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869137937
2016-08-10 15:46:07,589 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it.
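The BackupSystemTable calls above (incremental backup table set, RS log timestamps, backup start code) all persist rows into the hbase:backup system table. A minimal read-only sketch that scans that table with the ordinary client API to inspect what was written (assumes the client configuration points at the test cluster's ZooKeeper):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    // Dump every row of the hbase:backup system table.
    public class DumpBackupTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Table table = conn.getTable(TableName.valueOf("hbase:backup"));
             ResultScanner scanner = table.getScanner(new Scan())) {
          for (Result r : scanner) {
            System.out.println(r);
          }
        }
      }
    }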
2016-08-10 15:46:07,596 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869137937 FULL
2016-08-10 15:46:07,607 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741877_1053{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:07,607 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513/.backup.manifest
2016-08-10 15:46:07,607 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-10 15:46:07,607 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869137937
2016-08-10 15:46:07,607 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it.
2016-08-10 15:46:07,608 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869137937 FULL
2016-08-10 15:46:07,613 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741878_1054{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:07,614 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest
2016-08-10 15:46:07,614 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-10 15:46:07,614 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869137937
2016-08-10 15:46:07,614 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it.
2016-08-10 15:46:07,614 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869137937 FULL
2016-08-10 15:46:07,619 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741879_1055{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:07,620 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest
2016-08-10 15:46:07,620 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-10 15:46:07,620 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869137937
2016-08-10 15:46:07,620 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it.
2016-08-10 15:46:07,620 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869137937 FULL 2016-08-10 15:46:07,626 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741880_1056{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 170 2016-08-10 15:46:08,032 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest 2016-08-10 15:46:08,032 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira 2016-08-10 15:46:08,033 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(447): Backup backup_1470869137937 finished: type=FULL,tablelist=ns4:test-14708691290513;ns2:test-14708691290511;ns3:test-14708691290512;ns1:test-1470869129051,targetRootDir=hdfs://localhost:56218/backupUT,startts=1470869138143,completets=1470869167586,bytescopied=0 2016-08-10 15:46:08,033 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1470869137937 set status=COMPLETE 2016-08-10 15:46:08,035 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:08,037 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(154): Trying to delete snapshot for full backup. 2016-08-10 15:46:08,037 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1470869141341_ns4_test-14708691290513 2016-08-10 15:46:08,040 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1470869141341_ns4_test-14708691290513 2016-08-10 15:46:08,041 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741864_1040 127.0.0.1:56219 2016-08-10 15:46:08,041 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741866_1042 127.0.0.1:56219 2016-08-10 15:46:08,042 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1470869141341_ns4_test-14708691290513 for backup backup_1470869137937 succeeded. 2016-08-10 15:46:08,042 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:46:08,045 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1470869140101_ns2_test-14708691290511 2016-08-10 15:46:08,046 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741857_1033 127.0.0.1:56219 2016-08-10 15:46:08,046 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741860_1036 127.0.0.1:56219 2016-08-10 15:46:08,046 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1470869140101_ns2_test-14708691290511 for backup backup_1470869137937 succeeded. 
2016-08-10 15:46:08,046 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:46:08,049 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1470869140719_ns3_test-14708691290512 2016-08-10 15:46:08,050 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741861_1037 127.0.0.1:56219 2016-08-10 15:46:08,050 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741863_1039 127.0.0.1:56219 2016-08-10 15:46:08,050 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1470869140719_ns3_test-14708691290512 for backup backup_1470869137937 succeeded. 2016-08-10 15:46:08,050 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:46:08,053 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1470869138934_ns1_test-1470869129051 2016-08-10 15:46:08,054 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741853_1029 127.0.0.1:56219 2016-08-10 15:46:08,054 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741856_1032 127.0.0.1:56219 2016-08-10 15:46:08,054 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1470869138934_ns1_test-1470869129051 for backup backup_1470869137937 succeeded. 2016-08-10 15:46:08,055 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(462): Backup backup_1470869137937 completed. 
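[editor's note] The four deletions above are the standard cleanup for a snapshot-based full backup: once the exported copies have been verified, the source snapshots are dropped. A sketch of that loop using the stock Admin API, with the snapshot names taken directly from the log:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SnapshotCleanupSketch {
  public static void main(String[] args) throws IOException {
    String[] snapshots = {
        "snapshot_1470869141341_ns4_test-14708691290513",
        "snapshot_1470869140101_ns2_test-14708691290511",
        "snapshot_1470869140719_ns3_test-14708691290512",
        "snapshot_1470869138934_ns1_test-1470869129051"};
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      for (String name : snapshots) {
        // SnapshotManager logs "Deleting snapshot: ..." for each call;
        // the NameNode then queues the snapshot's blocks for invalidation.
        admin.deleteSnapshot(name);
      }
    }
  }
}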
2016-08-10 15:46:08,163 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:562260000000001 2016-08-10 15:46:08,163 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 30.1000sec: FullTableBackupProcedure (targetRootDir=hdfs://localhost:56218/backupUT; backupId=backup_1470869137937; tables=ns1:test-1470869129051,ns2:test-14708691290511,ns3:test-14708691290512,ns4:test-14708691290513) id=13 state=FINISHED 2016-08-10 15:46:09,377 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741856_1032, blk_1073741857_1033, blk_1073741860_1036, blk_1073741861_1037, blk_1073741863_1039, blk_1073741864_1040, blk_1073741866_1042, blk_1073741853_1029] 2016-08-10 15:46:10,753 DEBUG [10.22.16.34,56262,1470869110526_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-10 15:46:10,789 DEBUG [10.22.16.34,56266,1470869110579_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-10 15:46:10,968 DEBUG [10.22.16.34,56262,1470869110526_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/meta 2016-08-10 15:46:10,968 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info 2016-08-10 15:46:10,969 DEBUG [10.22.16.34,56262,1470869110526_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/session 2016-08-10 15:46:10,969 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table 2016-08-10 15:46:10,970 DEBUG [10.22.16.34,56262,1470869110526_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/info 2016-08-10 15:46:16,291 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-10 15:46:16,292 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1470869137937 2016-08-10 15:46:16,298 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:16,298 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56400; # active connections: 4 2016-08-10 15:46:16,299 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:16,299 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56400 with version info: version: "2.0.0-SNAPSHOT" url: 
"git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:16,300 DEBUG [main] backup.TestIncrementalBackup(64): writing 199 rows to ns1:test-1470869129051 2016-08-10 15:46:16,305 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:16,305 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56401; # active connections: 5 2016-08-10 15:46:16,306 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:16,306 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56401 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:16,306 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,309 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,311 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,313 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,315 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,317 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,318 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,320 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,321 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,322 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,324 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,325 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,326 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,328 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,329 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,331 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,332 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,334 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,335 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,337 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,338 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,339 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,341 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,342 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,343 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,345 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,346 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,348 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,349 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,351 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,352 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,353 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,355 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,356 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,358 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,359 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,360 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,362 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,363 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,365 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,366 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,367 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,369 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,370 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,371 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,373 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,374 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,376 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,377 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,379 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,380 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,382 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,383 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,385 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,386 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,388 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,389 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,391 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,392 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,393 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,395 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,396 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,397 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,399 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,400 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,401 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,402 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,404 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,405 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,406 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,408 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,410 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,412 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,413 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,414 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,416 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,417 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,418 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,420 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,421 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,422 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,424 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,425 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,426 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,428 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,429 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,430 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,432 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,433 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,434 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,436 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,437 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,438 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,440 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,441 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,442 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,443 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,445 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,446 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,447 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,448 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,450 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,451 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,452 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,453 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,455 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,456 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,458 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,459 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,461 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,462 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,464 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,466 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,468 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,469 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,471 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,472 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,474 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,475 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,477 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,478 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,480 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,481 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,482 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,484 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,485 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,487 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,488 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,490 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,492 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,493 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,494 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,496 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,497 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,499 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,500 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,502 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,503 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,504 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,506 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,507 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,509 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,511 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,512 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,513 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,515 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,516 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,517 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,519 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,520 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,522 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,523 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,525 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,526 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,528 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,530 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,531 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,533 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,534 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,536 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,537 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,538 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,540 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,541 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:16,542 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 [2016-08-10 15:46:16,543 through 15:46:16,589: repeated DEBUG wal.FSHLog$SyncRunner(1275) "syncing writer" entries for this same WAL, threads sync.0 through sync.4 cycling every 1-2 ms; elided] 2016-08-10 15:46:16,629 DEBUG [main] backup.TestIncrementalBackup(75): written 199 rows to ns1:test-1470869129051 [2016-08-10 15:46:16,633 through 15:46:16,640: repeated "syncing writer" entries for 10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197; elided] 2016-08-10 15:46:16,642 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:46:16,659 DEBUG [main] backup.TestIncrementalBackup(87): written 199 rows to ns2:test-14708691290511 2016-08-10 15:46:16,660 INFO [main] util.BackupClientUtil(105): Using existing backup root dir: hdfs://localhost:56218/backupUT 2016-08-10 15:46:16,664 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] impl.BackupSystemTable(431): get incr backup table set from hbase:backup 2016-08-10 15:46:16,665 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.HMaster(2641): Incremental backup for the following table set: [ns1:test-1470869129051, ns2:test-14708691290511, ns3:test-14708691290512, ns4:test-14708691290513] 2016-08-10 15:46:16,670 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x144a74ce connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:16,674 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x144a74ce0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:16,675 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d2792a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:46:16,675 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:46:16,675 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:46:16,675 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] backup.BackupInfo(125): CreateBackupContext: 4 ns1:test-1470869129051 2016-08-10 15:46:16,676 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x144a74ce-0x15676a151160010 connected 2016-08-10 15:46:16,784 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:56218/backupUT; backupId=backup_1470869176664; tables=ns1:test-1470869129051,ns2:test-14708691290511,ns3:test-14708691290512,ns4:test-14708691290513) id=14 state=RUNNABLE:PREPARE_INCREMENTAL added to the store. 2016-08-10 15:46:16,786 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:16,787 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:562260000000002 2016-08-10 15:46:16,787 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(130): Backup backup_1470869176664 started at 1470869176787. 
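The entries above show the dispatch path for the incremental backup request: the master RPC reads the incremental backup table set from hbase:backup, builds the backup context, and submits IncrementalTableBackupProcedure (procId=14) to the ProcedureExecutor; the caller then polls the master, producing the recurring "Checking to see if procedure is done procId=14" entries. A minimal sketch of such a poll-until-done loop, with hypothetical names (isProcedureDone stands in for the master RPC; this is not the HBase client API):

    import java.util.concurrent.TimeoutException;

    // Sketch only: a generic poll-until-done loop of the kind that produces
    // the repeated "Checking to see if procedure is done procId=14" entries.
    // isProcedureDone() is a stub standing in for the master RPC call.
    class ProcedurePollSketch {
        static boolean isProcedureDone(long procId) { return false; } // stub

        static void waitForProcedure(long procId, long timeoutMs)
                throws InterruptedException, TimeoutException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            long sleepMs = 100;                         // initial retry delay
            while (System.currentTimeMillis() < deadline) {
                if (isProcedureDone(procId)) return;    // done on the master
                Thread.sleep(sleepMs);
                sleepMs = Math.min(sleepMs * 2, 5_000); // back off between polls
            }
            throw new TimeoutException("procId=" + procId);
        }
    }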
2016-08-10 15:46:16,788 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1470869176664 set status=RUNNING 2016-08-10 15:46:16,791 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:16,791 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56405; # active connections: 8 2016-08-10 15:46:16,791 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:16,792 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56405 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:16,795 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:16,796 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56406; # active connections: 6 2016-08-10 15:46:16,796 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:16,797 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56406 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:16,797 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:16,798 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(134): Backup session backup_1470869176664 has been started. 2016-08-10 15:46:16,799 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(431): get incr backup table set from hbase:backup 2016-08-10 15:46:16,800 DEBUG [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(216): For incremental backup, current table set is [ns1:test-1470869129051, ns2:test-14708691290511, ns3:test-14708691290512, ns4:test-14708691290513] 2016-08-10 15:46:16,801 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(180): read backup start code from hbase:backup 2016-08-10 15:46:16,802 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:56218/backupUT 2016-08-10 15:46:16,805 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(93): StartCode 1470869107339 for backupID backup_1470869176664 2016-08-10 15:46:16,805 INFO [ProcedureExecutor-5] impl.IncrementalBackupManager(104): Execute roll log procedure for incremental backup ...
2016-08-10 15:46:16,809 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:46:16,810 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56407; # active connections: 9 2016-08-10 15:46:16,810 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:16,811 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56407 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:16,813 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(652): Client=tyu//10.22.16.34 procedure request for: rolllog-proc 2016-08-10 15:46:16,813 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] procedure.ProcedureCoordinator(177): Submitting procedure rolllog 2016-08-10 15:46:16,813 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(196): Starting procedure 'rolllog' 2016-08-10 15:46:16,813 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-10 15:46:16,814 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire' 2016-08-10 15:46:16,814 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members. 
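The acquire phase kicked off here runs over ZooKeeper: under /1/rolllog-proc the coordinator keeps three branches (acquired, reached, abort), creates acquired/rolllog to open the barrier, and waits for every member to register beneath it before creating reached/rolllog. A simplified coordinator-side rendition using the plain ZooKeeper client (a sketch of the pattern, not HBase's ZKProcedureCoordinatorRpcs, which uses watches and a CountDownLatch rather than polling):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Coordinator side of the two-phase barrier visible in this log:
    // create acquired/<op>, wait for every member to register under it,
    // create reached/<op>, wait again, then clean up the znodes.
    class BarrierCoordinatorSketch {
        static void run(ZooKeeper zk, String base, String op, int members)
                throws Exception {
            String acquired = base + "/acquired/" + op;
            String reached = base + "/reached/" + op;
            zk.create(acquired, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);           // open the 'acquire' phase
            awaitChildren(zk, acquired, members);     // members join the barrier
            zk.create(reached, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);           // start 'in-barrier' work
            awaitChildren(zk, reached, members);      // members report completion
            // The real coordinator then clears acquired/reached/abort znodes.
        }

        static void awaitChildren(ZooKeeper zk, String path, int expected)
                throws Exception {
            while (zk.getChildren(path, false).size() < expected) {
                Thread.sleep(50);  // polling keeps the sketch short
            }
        }
    }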
2016-08-10 15:46:16,814 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-10 15:46:16,814 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:16,815 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:46:16,815 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,815 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:46:16,815 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:46:16,815 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:46:16,815 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:46:16,816 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:46:16,816 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,816 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,816 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:16,816 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:16,816 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,816 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-10 15:46:16,816 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-10 15:46:16,816 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, 
/1/rolllog-proc/abort/rolllog 2016-08-10 15:46:16,816 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35 2016-08-10 15:46:16,817 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:16,817 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35 2016-08-10 15:46:16,817 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:16,817 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup. 2016-08-10 15:46:16,817 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup. 2016-08-10 15:46:16,817 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(53): Constructing a LogRollBackupSubprocedure. 2016-08-10 15:46:16,817 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2016-08-10 15:46:16,817 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(53): Constructing a LogRollBackupSubprocedure. 2016-08-10 15:46:16,817 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms 2016-08-10 15:46:16,817 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2016-08-10 15:46:16,817 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-10 15:46:16,817 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure (rolllog) in zk 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired 2016-08-10 15:46:16,818 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure (rolllog) in zk 2016-08-10 15:46:16,819 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,819 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,819 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,819 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,819 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,819 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:46:16,819 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:46:16,819 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,819 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,819 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:46:16,819 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:46:16,819 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,819 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-10 15:46:16,820 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:16,820 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,820 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,820 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:46:16,820 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56226,1470869103454' joining acquired barrier for procedure 'rolllog' on coordinator 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@494c932d[Count = 1] remaining members to acquire global barrier 2016-08-10 15:46:16,821 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,821 INFO 
[main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:46:16,821 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:16,822 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,822 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,822 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:46:16,822 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:46:16,822 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.16.34,56228,1470869104167' joining acquired barrier for procedure 'rolllog' on coordinator 2016-08-10 15:46:16,822 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@494c932d[Count = 0] remaining members to acquire global barrier 2016-08-10 15:46:16,822 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution. 
2016-08-10 15:46:16,823 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,823 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,823 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator. 2016-08-10 15:46:16,823 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,824 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:46:16,824 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:46:16,824 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator. 2016-08-10 15:46:16,824 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(72): ++ DRPC started: 10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,824 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(77): Trying to roll log in backup subprocedure, current log number: 1470869138221 2016-08-10 15:46:16,824 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,824 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-10 15:46:16,824 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
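On the member side, each server registers under acquired/rolllog/<member>, watches for reached/rolllog, runs its local task once the global barrier is reached (here RSRollLogTask, which rolls the server's WALs), and then registers under reached/rolllog/<member>. A matching member-side sketch, under the same simplifications as the coordinator sketch above:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Member-side counterpart (a simplification of ZKProcedureMemberRpcs /
    // Subprocedure, not the real code): join the acquired barrier, wait for
    // reached/<op>, do the local work, then report completion in zk.
    class BarrierMemberSketch {
        static void run(ZooKeeper zk, String base, String op, String member,
                        Runnable localWork) throws Exception {
            zk.create(base + "/acquired/" + op + "/" + member, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            while (zk.exists(base + "/reached/" + op, false) == null) {
                Thread.sleep(50);  // the real code sets a watch instead
            }
            localWork.run();       // here: roll the WAL (RSRollLogTask)
            zk.create(base + "/reached/" + op + "/" + member, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
    }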
2016-08-10 15:46:16,824 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:46:16,824 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(72): ++ DRPC started: 10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,824 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish. 2016-08-10 15:46:16,824 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(77): Trying to roll log in backup subprocedure, current log number: 1470869138221 2016-08-10 15:46:16,824 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:16,825 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:16,826 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:16,826 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:46:16,826 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:46:16,827 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:16,827 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869176824 2016-08-10 15:46:16,827 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:16,829 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] wal.FSHLog(665): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825 2016-08-10 15:46:16,833 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:46:16,833 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:46:16,837 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741851_1027{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 91 2016-08-10 15:46:16,838 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741852_1028{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 91 2016-08-10 15:46:16,892 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:17,097 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] 
master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:17,243 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] wal.FSHLog(885): Rolled WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825 2016-08-10 15:46:17,243 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] wal.FSHLog(885): Rolled WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869176824 2016-08-10 15:46:17,244 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] wal.FSHLog(952): Archiving hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:46:17,244 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] wal.FSHLog(952): Archiving hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:46:17,246 INFO [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): After roll log in backup subprocedure, current log number: 1470869176825 2016-08-10 15:46:17,246 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] impl.BackupSystemTable(222): read region server last roll log result from hbase:backup 2016-08-10 15:46:17,247 INFO [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): After roll log in backup subprocedure, current log number: 1470869176824 2016-08-10 15:46:17,247 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] impl.BackupSystemTable(222): read region server last roll log result from hbase:backup 2016-08-10 15:46:17,248 DEBUG [rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-10 15:46:17,249 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:17,250 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-10 15:46:17,250 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.16.34,56228,1470869104167'
in zk 2016-08-10 15:46:17,250 DEBUG [rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-10 15:46:17,251 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:17,252 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-10 15:46:17,252 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.16.34,56226,1470869103454' in zk 2016-08-10 15:46:17,253 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,253 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-10 15:46:17,253 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:46:17,253 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,254 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,254 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,254 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:46:17,254 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:46:17,253 DEBUG [member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 2016-08-10 15:46:17,254 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-10 15:46:17,254 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-10 15:46:17,254 DEBUG [member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 
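Each member finishes by persisting its roll result: the "current log number" reported above (e.g. 1470869176825) is the timestamp suffix of the freshly opened WAL, recorded per region server in hbase:backup, where it later surfaces as the newestTimestamps map. A small bookkeeping sketch of that step, with a plain Map standing in for the hbase:backup table:

    import java.util.HashMap;
    import java.util.Map;

    // Bookkeeping sketch: the roll timestamp is the trailing dot-separated
    // field of the WAL name (e.g. ...regiongroup-0.1470869176825), recorded
    // per server. The real code persists this in hbase:backup.
    class RollResultSketch {
        final Map<String, Long> lastRollTs = new HashMap<>();

        static long walTimestamp(String walName) {
            return Long.parseLong(
                    walName.substring(walName.lastIndexOf('.') + 1));
        }

        void recordRoll(String server, String newWalName) {
            lastRollTs.put(server, walTimestamp(newWalName));
        }
    }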
2016-08-10 15:46:17,254 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:46:17,255 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,255 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,256 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,256 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:46:17,256 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:46:17,256 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,257 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,257 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,257 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.16.34,56228,1470869104167': 2016-08-10 15:46:17,257 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.16.34,56228,1470869104167' released barrier for procedure 'rolllog', counting down latch. Waiting for 1 more 2016-08-10 15:46:17,258 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,258 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,258 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,259 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,259 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:46:17,259 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:46:17,259 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,259 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,260 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,260 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.16.34,56226,1470869103454': 2016-08-10 15:46:17,260 DEBUG [main-EventThread] procedure.Procedure(329): Member:
'10.22.16.34,56226,1470869103454' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more 2016-08-10 15:46:17,260 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed 2016-08-10 15:46:17,260 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(230): Running finish phase. 2016-08-10 15:46:17,260 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-10 15:46:17,260 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog 2016-08-10 15:46:17,260 INFO [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort 2016-08-10 15:46:17,261 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,261 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,261 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,261 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,261 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,261 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,262 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:46:17,262 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:46:17,262 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:46:17,262 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,262 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-10 15:46:17,262 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-10 15:46:17,262 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,262 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,262 DEBUG
[main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-10 15:46:17,262 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,262 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,263 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,263 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,263 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-10 15:46:17,264 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,264 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,264 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,264 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-10 15:46:17,264 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-10 15:46:17,265 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,265 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,266 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:46:17,266 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:46:17,266 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:46:17,267 DEBUG [(10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
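Note that the abort-related events just above and below are part of normal teardown, not a failure: clearing the procedure's znodes touches /1/rolllog-proc/abort as well, so the members' watchers fire, log "Aborting procedure member", check the abort branch, and find nothing there (the later "node does not exist (not an error)" entry), after which the coordinator reports the roll log procedure as successful.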
2016-08-10 15:46:17,267 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error) 2016-08-10 15:46:17,267 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:46:17,267 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:46:17,267 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:46:17,267 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog 2016-08-10 15:46:17,267 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful! 2016-08-10 15:46:17,267 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-10 15:46:17,267 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-10 15:46:17,267 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-10 15:46:17,267 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-10 15:46:17,268 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-10 15:46:17,268 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56226,1470869103454 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.16.34,56228,1470869104167 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-10 15:46:17,268 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-10 15:46:17,269 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2481): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. (max 857 ms per retry) 2016-08-10 15:46:17,269 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2490): (#1) Sleeping: 100ms while waiting for procedure completion. 2016-08-10 15:46:17,374 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2496): Getting current status of procedure from master... 2016-08-10 15:46:17,381 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(904): Checking to see if procedure from request:rolllog-proc is done 2016-08-10 15:46:17,383 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(222): read region server last roll log result from hbase:backup 2016-08-10 15:46:17,387 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(215): In getLogFilesForNewBackup() olderTimestamps: {10.22.16.34:56226=1470869107339, 10.22.16.34:56228=1470869107985} newestTimestamps: {10.22.16.34:56226=1470869138221, 10.22.16.34:56228=1470869138221} 2016-08-10 15:46:17,389 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869176824 2016-08-10 15:46:17,390 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:46:17,390 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): excluding hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 1470869108161 <= 1470869138221 2016-08-10 15:46:17,390 WARN [ProcedureExecutor-5] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta; Not a host:port pair: 10.22.16.34,56226,1470869103454.meta 2016-08-10 15:46:17,390 WARN [ProcedureExecutor-5] util.BackupServerUtil(237): Skip log file (can't parse):
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): excluding hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 1470869110496 <= 1470869138221 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): excluding hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 1470869132540 <= 1470869138221 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:46:17,391 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): excluding hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 1470869134197 <= 1470869138221 2016-08-10 15:46:17,392 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(316): excluding old hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 1470869107339 <= 1470869107339 2016-08-10 15:46:17,392 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(316): excluding old hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 1470869107985 <= 1470869107985 2016-08-10 15:46:17,393 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(500): get WAL files from hbase:backup 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:56218/backupUT/backup_1470869137937/hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:56218/backupUT/backup_1470869137937/hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(313): 
setting incr backup file list 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:46:17,398 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:46:17,400 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:17,508 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x76f13999 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:17,512 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x76f139990x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:17,513 DEBUG [ProcedureExecutor-5] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43306226, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:46:17,513 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:46:17,513 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:46:17,514 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x76f13999-0x15676a151160011 connected 2016-08-10 15:46:17,516 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:17,516 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56411; # active connections: 10 2016-08-10 15:46:17,517 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:17,517 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56411 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: 
"8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:17,518 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns4:test-14708691290513 2016-08-10 15:46:17,529 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741883_1059{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:46:17,532 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/backupUT/backup_1470869176664/ns4/test-14708691290513/.tabledesc/.tableinfo.0000000001 2016-08-10 15:46:17,532 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 2016-08-10 15:46:17,533 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x76f13999 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:17,536 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x76f139990x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:17,538 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x76f13999-0x15676a151160012 connected 2016-08-10 15:46:17,539 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns4:test-14708691290513 2016-08-10 15:46:17,546 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741884_1060{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 50 2016-08-10 15:46:17,903 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:17,951 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns4:test-14708691290513 2016-08-10 15:46:17,953 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns2:test-14708691290511 2016-08-10 15:46:17,965 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741885_1061{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 295 2016-08-10 15:46:18,372 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/backupUT/backup_1470869176664/ns2/test-14708691290511/.tabledesc/.tableinfo.0000000001 2016-08-10 15:46:18,373 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
2016-08-10 15:46:18,373 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x76f13999 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:18,377 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x76f139990x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:18,387 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns2:test-14708691290511 2016-08-10 15:46:18,387 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x76f13999-0x15676a151160013 connected 2016-08-10 15:46:18,393 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741886_1062{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:46:18,394 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns2:test-14708691290511 2016-08-10 15:46:18,395 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns3:test-14708691290512 2016-08-10 15:46:18,404 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741887_1063{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:46:18,407 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/backupUT/backup_1470869176664/ns3/test-14708691290512/.tabledesc/.tableinfo.0000000001 2016-08-10 15:46:18,408 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
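The "copy table info" steps snapshot each table's schema into <backupRoot>/<backupId>/<ns>/<table>/.tabledesc/.tableinfo.0000000001, as the FSTableDescriptors lines show. Fetching the descriptors themselves needs only the public Admin API; a sketch against the client API of this era (the table list is taken from the log; the write path is internal to the backup code and not shown):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch: read the schema of each table in the backup set, the input to
// the .tableinfo files written under the backup directory.
public final class TableInfoSketch {
  public static void main(String[] args) throws Exception {
    String[] tables = { "ns4:test-14708691290513", "ns2:test-14708691290511",
        "ns3:test-14708691290512", "ns1:test-1470869129051" };
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      for (String t : tables) {
        HTableDescriptor htd = admin.getTableDescriptor(TableName.valueOf(t));
        System.out.println(t + " => " + htd);
      }
    }
  }
}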
2016-08-10 15:46:18,408 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x76f13999 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:18,410 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x76f139990x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:18,412 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x76f13999-0x15676a151160014 connected 2016-08-10 15:46:18,413 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns3:test-14708691290512 2016-08-10 15:46:18,419 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741888_1064{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:46:18,419 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns3:test-14708691290512 2016-08-10 15:46:18,420 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns1:test-1470869129051 2016-08-10 15:46:18,430 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741889_1065{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 294 2016-08-10 15:46:18,840 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/backupUT/backup_1470869176664/ns1/test-1470869129051/.tabledesc/.tableinfo.0000000001 2016-08-10 15:46:18,841 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
2016-08-10 15:46:18,841 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x76f13999 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:18,845 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x76f139990x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:18,847 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns1:test-1470869129051 2016-08-10 15:46:18,847 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x76f13999-0x15676a151160015 connected 2016-08-10 15:46:18,855 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741890_1066{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:46:18,855 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns1:test-1470869129051 2016-08-10 15:46:18,856 INFO [ProcedureExecutor-5] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160011 2016-08-10 15:46:18,856 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:46:18,857 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(125): Incremental copy is starting. 2016-08-10 15:46:18,857 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56411 because read count=-1. 
Number of active connections: 10 2016-08-10 15:46:18,857 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1497094290) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:46:18,860 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(292): Doing COPY_TYPE_DISTCP 2016-08-10 15:46:18,886 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(301): DistCp options: [hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161, hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496, hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540, hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197, hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221, hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221, hdfs://localhost:56218/backupUT/backup_1470869176664/WALs] 2016-08-10 15:46:18,906 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:18,970 WARN [ProcedureExecutor-5] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
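The "DistCp options" line above is simply the source WAL paths followed by the target directory, handed to DistCp's Java API. A sketch of the same invocation shape against Hadoop 2.7 (one source path shown; the JobResourceUploader warning concerns launching via ToolRunner, not the options themselves):

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

// Sketch: copy WAL files into the backup's WALs directory via DistCp,
// mirroring the option list logged by MapReduceBackupCopyService.
public final class WalCopySketch {
  public static void main(String[] args) throws Exception {
    Path target = new Path("hdfs://localhost:56218/backupUT/backup_1470869176664/WALs");
    DistCpOptions options = new DistCpOptions(
        Arrays.asList(new Path(
            "hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/"
            + "WALs/10.22.16.34,56228,1470869104167/"
            + "10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496")),
        target);
    Job job = new DistCp(new Configuration(), options).execute(); // submits and waits
    System.out.println("DistCp job " + job.getJobID() + " successful=" + job.isSuccessful());
  }
}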
2016-08-10 15:46:19,171 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741891_1067{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:19,174 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741891_1067 127.0.0.1:56219
2016-08-10 15:46:19,174 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
java.io.IOException: Mismatch in length of source:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 and target:hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.distcp.tmp.attempt_local1643260168_0005_m_000000_0
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:282)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:20,911 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-10 15:46:21,390 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741891_1067]
2016-08-10 15:46:21,654 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741892_1068{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 973
2016-08-10 15:46:22,062 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741892_1068 127.0.0.1:56219
2016-08-10 15:46:22,062 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
java.io.IOException: Mismatch in length of source:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 and target:hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.distcp.tmp.attempt_local1643260168_0005_m_000000_0
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:282)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:24,230 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741893_1069{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:24,232 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741893_1069 127.0.0.1:56219
2016-08-10 15:46:24,232 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
java.io.IOException: Mismatch in length of source:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 and target:hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.distcp.tmp.attempt_local1643260168_0005_m_000000_0
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:282)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:24,232 ERROR [LocalJobRunner Map Task Executor #0] mapred.CopyMapper(313): Failure in copying hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
java.io.IOException: File copy failed: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 --> hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:282)
    ... 11 more
Caused by: java.io.IOException: Mismatch in length of source:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 and target:hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.distcp.tmp.attempt_local1643260168_0005_m_000000_0
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
    ... 12 more
2016-08-10 15:46:24,236 WARN [Thread-2327] mapred.LocalJobRunner$Job(560): job_local1643260168_0005
java.lang.Exception: java.io.IOException: File copy failed: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 --> hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.io.IOException: File copy failed: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 --> hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:282)
    ... 11 more
Caused by: java.io.IOException: Mismatch in length of source:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 and target:hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.distcp.tmp.attempt_local1643260168_0005_m_000000_0
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
    ... 12 more
2016-08-10 15:46:24,391 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741892_1068, blk_1073741893_1069]
2016-08-10 15:46:24,622 INFO [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(242): Progress: 10.0%
2016-08-10 15:46:24,622 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1470869176664 set status=RUNNING
2016-08-10 15:46:24,625 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:46:24,626 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(135): Backup progress data "10%" has been updated to hbase:backup for backup_1470869176664
2016-08-10 15:46:24,626 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(250): Backup progress data updated to hbase:backup: "Progress: 10.0% - 514 bytes copied."
2016-08-10 15:46:24,626 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(262): DistCp job-id: job_local1643260168_0005
2016-08-10 15:46:24,631 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(176): Incremental copy from hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs finished.
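The three "Mismatch in length" retries above are what DistCp reports when a source file's length changes between job planning and the copy itself, which is exactly what can happen when a WAL is still open for appends; compareFileLengths() then rejects the copied temp file. One defensive pre-check, shown here only as a sketch of the failure mode (not something this test performs), is to ask HDFS whether the file is closed before putting it on the copy list:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: skip WALs that HDFS still considers open for write, since their
// reported length can change mid-copy and trip DistCp's length check.
public final class OpenWalCheckSketch {
  static boolean safeToCopy(FileSystem fs, Path wal) throws Exception {
    if (fs instanceof DistributedFileSystem) {
      // isFileClosed() reports whether the file's last block is finalized.
      return ((DistributedFileSystem) fs).isFileClosed(wal);
    }
    return true; // non-HDFS: no cheap way to tell, assume closed
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path wal = new Path(args[0]);
    System.out.println(wal + " closed=" + safeToCopy(wal.getFileSystem(conf), wal));
  }
}

Note that despite the per-file failures, the job reports "Progress: 10.0%" and the procedure logs the incremental copy as finished, so the retries here are absorbed by the local-runner job rather than failing the backup outright.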
2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1470869176664 hdfs://localhost:56218/backupUT files [hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221,hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221] 2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161 2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540 2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:46:24,631 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:46:24,633 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:24,744 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:56218/backupUT 2016-08-10 15:46:24,748 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns4:test-14708691290513,ns2:test-14708691290511,ns3:test-14708691290512,ns1:test-1470869129051] 2016-08-10 15:46:24,750 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer 
hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:24,752 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:56218/backupUT 2016-08-10 15:46:24,755 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(205): write backup start code to hbase:backup 1470869138221 2016-08-10 15:46:24,756 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:24,757 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-10 15:46:24,757 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:24,757 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:24,757 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:24,762 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:24,762 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869176664 INCREMENTAL 2016-08-10 15:46:24,763 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:24,763 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:24,763 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:24,766 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:24,773 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741894_1070{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 814 2016-08-10 15:46:24,917 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:25,179 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869176664/ns4/test-14708691290513/.backup.manifest 2016-08-10 15:46:25,179 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-10 15:46:25,179 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,179 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,179 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,184 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 
2016-08-10 15:46:25,184 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869176664 INCREMENTAL 2016-08-10 15:46:25,184 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,184 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,184 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,187 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:25,194 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741895_1071{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:46:25,195 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869176664/ns2/test-14708691290511/.backup.manifest 2016-08-10 15:46:25,195 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-10 15:46:25,195 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,195 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,195 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,198 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:25,198 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869176664 INCREMENTAL 2016-08-10 15:46:25,198 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,198 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,198 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,201 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:25,210 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741896_1072{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:46:25,211 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869176664/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:46:25,211 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 
2016-08-10 15:46:25,211 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,211 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,211 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,214 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:25,214 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869176664 INCREMENTAL 2016-08-10 15:46:25,214 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,214 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,214 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,216 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-10 15:46:25,222 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741897_1073{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:46:25,222 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869176664/ns1/test-1470869129051/.backup.manifest 2016-08-10 15:46:25,222 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 4 tables exist in table set. 2016-08-10 15:46:25,222 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1470869176664 2016-08-10 15:46:25,222 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-10 15:46:25,222 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-10 15:46:25,226 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 
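By this point each table in the backup set has its own .backup.manifest under <backupRoot>/<backupId>/<ns>/<table>/ (and the WAL bundle gets one more under <backupId>/WALs/, as the next entries show). Enumerating the per-table manifests needs nothing beyond a FileSystem glob; a sketch using the layout visible in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: list the per-table manifest files a backup session wrote, e.g.
// /backupUT/backup_1470869176664/ns4/test-14708691290513/.backup.manifest
// (the WALs manifest sits one level up and needs a separate pattern).
public final class ManifestListSketch {
  public static void main(String[] args) throws Exception {
    Path backupRoot = new Path("hdfs://localhost:56218/backupUT");
    String backupId = "backup_1470869176664";
    FileSystem fs = backupRoot.getFileSystem(new Configuration());
    for (FileStatus st : fs.globStatus(new Path(backupRoot,
        backupId + "/*/*/.backup.manifest"))) {
      System.out.println(st.getPath());
    }
  }
}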
2016-08-10 15:46:25,226 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:56218/backupUT backup_1470869176664 INCREMENTAL 2016-08-10 15:46:25,232 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741898_1074{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:46:25,233 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.backup.manifest 2016-08-10 15:46:25,233 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira 2016-08-10 15:46:25,233 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(447): Backup backup_1470869176664 finished: type=INCREMENTAL,tablelist=ns4:test-14708691290513;ns2:test-14708691290511;ns3:test-14708691290512;ns1:test-1470869129051,targetRootDir=hdfs://localhost:56218/backupUT,startts=1470869176787,completets=1470869184757,bytescopied=0 2016-08-10 15:46:25,233 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1470869176664 set status=COMPLETE 2016-08-10 15:46:25,234 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496 2016-08-10 15:46:25,236 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(462): Backup backup_1470869176664 completed. 2016-08-10 15:46:25,345 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:562260000000002 2016-08-10 15:46:25,346 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 8.5600sec: IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:56218/backupUT; backupId=backup_1470869176664; tables=ns1:test-1470869129051,ns2:test-14708691290511,ns3:test-14708691290512,ns4:test-14708691290513) id=14 state=FINISHED 2016-08-10 15:46:34,924 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-10 15:46:34,925 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1470869176664 2016-08-10 15:46:34,933 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest 2016-08-10 15:46:34,937 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:46:34,938 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest 2016-08-10 15:46:34,938 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest 2016-08-10 15:46:34,941 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:46:34,941 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest 2016-08-10 15:46:34,942 DEBUG [main] impl.BackupManifest(325): Loading manifest from: 
hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:46:34,944 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:46:34,944 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:46:34,945 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513/.backup.manifest 2016-08-10 15:46:34,948 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:46:34,948 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns4/test-14708691290513/.backup.manifest 2016-08-10 15:46:34,949 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x151c24dd connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:34,953 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x151c24dd0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:34,954 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7aff0999, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:46:34,954 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:46:34,954 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:46:34,955 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x151c24dd-0x15676a151160016 connected 2016-08-10 15:46:34,957 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:34,957 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56444; # active connections: 10 2016-08-10 15:46:34,957 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:34,958 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56444 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:34,958 INFO [main] impl.RestoreClientImpl(167): HBase table ns1:table1_restore does not exist. It will be created during restore process 2016-08-10 15:46:34,959 INFO [main] impl.RestoreClientImpl(167): HBase table ns2:table2_restore does not exist. It will be created during restore process 2016-08-10 15:46:34,960 INFO [main] impl.RestoreClientImpl(167): HBase table ns3:table3_restore does not exist. It will be created during restore process 2016-08-10 15:46:34,961 INFO [main] impl.RestoreClientImpl(167): HBase table ns4:table4_restore does not exist. 
It will be created during restore process 2016-08-10 15:46:34,961 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160016 2016-08-10 15:46:34,961 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:46:34,965 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-1009958274) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:46:34,965 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56444 because read count=-1. Number of active connections: 10 2016-08-10 15:46:34,965 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-10 15:46:34,968 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest 2016-08-10 15:46:34,972 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:46:34,972 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest 2016-08-10 15:46:34,972 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1470869129051' to 'ns1:table1_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051 2016-08-10 15:46:34,983 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1f3d7115 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:34,986 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1f3d71150x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:34,986 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ce67e38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:46:34,986 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:46:34,986 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:46:34,987 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1f3d7115-0x15676a151160017 connected 2016-08-10 15:46:34,988 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:46:34,988 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56448; # active connections: 10 2016-08-10 15:46:34,989 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:34,989 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56448 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:34,990 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns1:table1_restore' 2016-08-10 15:46:34,990 DEBUG 
[main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426 2016-08-10 15:46:34,991 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f in region [hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426] 2016-08-10 15:46:34,992 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:46:34,994 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 first=row0 last=row99 2016-08-10 15:46:35,002 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:46:35,002 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56449; # active connections: 11 2016-08-10 15:46:35,003 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:46:35,003 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56449 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:46:35,004 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-10 15:46:35,110 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
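Before the create call above was issued, RestoreServerUtil derived the target table's region layout by opening each archived HFile and reading its first and last row ("first=row0 last=row99"). A sketch of that derivation with the HFile reader API of this era (method signatures have shifted across HBase versions, so treat the details as illustrative):

import java.util.TreeSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: scan the archived family dir of a backup image and collect each
// HFile's first row as a candidate region boundary for the restore table.
public final class SplitKeySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path familyDir = new Path("hdfs://localhost:56218/backupUT/backup_1470869137937/"
        + "ns1/test-1470869129051/archive/data/ns1/test-1470869129051/"
        + "1af52b0fe0f87b7398a77bf958343426/f");
    FileSystem fs = familyDir.getFileSystem(conf);
    TreeSet<byte[]> boundaries = new TreeSet<>(Bytes.BYTES_COMPARATOR);
    for (FileStatus st : fs.listStatus(familyDir)) {
      HFile.Reader reader = HFile.createReader(fs, st.getPath(), new CacheConfig(conf), conf);
      try {
        byte[] first = reader.getFirstRowKey(); // source of "first=row0" above
        if (first != null) boundaries.add(first);
      } finally {
        reader.close();
      }
    }
    for (byte[] k : boundaries) System.out.println(Bytes.toStringBinary(k));
  }
}

With a single region spanning row0..row99, the collected set collapses to one key, which is consistent with the single-region create logged above.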
2016-08-10 15:46:35,114 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-10 15:46:35,115 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:562260000000000 2016-08-10 15:46:35,218 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-10 15:46:35,232 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741899_1075{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:46:35,237 DEBUG [ProcedureExecutor-6] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001 2016-08-10 15:46:35,238 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns1:table1_restore 2016-08-10 15:46:35,245 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741900_1076{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:46:35,245 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. 2016-08-10 15:46:35,246 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.: disabling compactions & flushes 2016-08-10 15:46:35,246 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. 2016-08-10 15:46:35,246 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. 
2016-08-10 15:46:35,353 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507."}
2016-08-10 15:46:35,354 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:35,355 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:46:35,424 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-10 15:46:35,460 INFO [ProcedureExecutor-6] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,461 ERROR [ProcedureExecutor-6] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:46:35,461 INFO [ProcedureExecutor-6] master.RegionStates(1106): Transition {3d6498df4d520f901c490789b272c507 state=OFFLINE, ts=1470869195460, server=null} to {3d6498df4d520f901c490789b272c507 state=PENDING_OPEN, ts=1470869195461, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:35,461 INFO [ProcedureExecutor-6] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,462 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:35,463 INFO [PriorityRpcServer.handler=3,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:35,469 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(6339): Opening region: {ENCODED => 3d6498df4d520f901c490789b272c507, NAME => 'ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:46:35,470 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore 3d6498df4d520f901c490789b272c507
2016-08-10 15:46:35,470 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:35,473 INFO [StoreOpener-3d6498df4d520f901c490789b272c507-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:35,473 INFO [StoreOpener-3d6498df4d520f901c490789b272c507-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:46:35,474 DEBUG [StoreOpener-3d6498df4d520f901c490789b272c507-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f
2016-08-10 15:46:35,474 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507
2016-08-10 15:46:35,479 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:46:35,479 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(871): Onlined 3d6498df4d520f901c490789b272c507; next sequenceid=2
2016-08-10 15:46:35,479 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:46:35,480 INFO [PostOpenDeployTasks:3d6498df4d520f901c490789b272c507] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:35,481 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {3d6498df4d520f901c490789b272c507 state=PENDING_OPEN, ts=1470869195461, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,481 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(1106): Transition {3d6498df4d520f901c490789b272c507 state=PENDING_OPEN, ts=1470869195461, server=10.22.16.34,56228,1470869104167} to {3d6498df4d520f901c490789b272c507 state=OPEN, ts=1470869195481, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:35,481 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,481 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:35,482 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(452): Onlined 3d6498df4d520f901c490789b272c507 on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,482 DEBUG [ProcedureExecutor-6] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,482 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869195482,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-10 15:46:35,482 ERROR [B.defaultRpcServer.handler=4,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:35,483 DEBUG [PostOpenDeployTasks:3d6498df4d520f901c490789b272c507] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:35,484 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:35,484 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:35,485 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META
2016-08-10 15:46:35,726 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-10 15:46:35,816 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:562260000000000
2016-08-10 15:46:35,816 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 702msec: CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=FINISHED
2016-08-10 15:46:36,232 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-10 15:46:36,233 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns1:table1_restore completed
2016-08-10 15:46:36,233 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:36,233 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160017
2016-08-10 15:46:36,236 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:36,239 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-405419843) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:36,239 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:56218; local cluster node: hdfs://localhost:56218
2016-08-10 15:46:36,239 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051 on local cluster, back it up before restore
2016-08-10 15:46:36,239 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-305298285) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:36,239 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56448 because read count=-1. Number of active connections: 11
2016-08-10 15:46:36,239 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56449 because read count=-1. Number of active connections: 11
2016-08-10 15:46:36,254 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741901_1077{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:36,255 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-10 15:46:36,256 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-10 15:46:36,274 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426
2016-08-10 15:46:36,275 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x66a4f0c5 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:36,278 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x66a4f0c50x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:36,278 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24845de0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:36,279 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:36,279 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:36,279 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x66a4f0c5-0x15676a151160018 connected
2016-08-10 15:46:36,281 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:36,281 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56455; # active connections: 10
2016-08-10 15:46:36,282 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:36,282 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56455 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:36,287 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-10 15:46:36,296 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:36,296 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56456; # active connections: 11
2016-08-10 15:46:36,297 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:36,297 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56456 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:36,313 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:36,316 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 first=row0 last=row99
2016-08-10 15:46:36,326 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507., hostname=10.22.16.34,56228,1470869104167, seqNum=2 for row with hfile group [{[B@f016121,hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090}]
2016-08-10 15:46:36,334 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:36,334 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56457; # active connections: 7
2016-08-10 15:46:36,334 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:36,335 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56457 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:36,335 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(670): Validating hfile at hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 for inclusion in store f region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
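At this point the LoadIncrementalHFiles-* threads have handed the staged HFile to the region server (the HStore "Validating hfile" record above; the commit follows below). A hedged sketch of driving that same bulk-load path from client code, with the staging path copied from the log and the connection setup assumed; the doBulkLoad signature shown here is the group-and-dispatch entry point in the HBase 2.0-SNAPSHOT era mapreduce package:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

// Sketch only: replays the bulk-load step that the LoadIncrementalHFiles-* threads log above.
public class BulkLoadRestoredHFilesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml points at the cluster
    TableName tn = TableName.valueOf("ns1:table1_restore");
    // Layout must be <dir>/<family>/<hfile>; the log shows .../restore/<region>/f/<hfile>.
    Path staged = new Path("hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn);
         Admin admin = conn.getAdmin()) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // Groups HFiles by the region that owns their key range, then asks each region
      // server to adopt its files; the HStore "Validating"/"Committing"/"Successfully
      // loaded" records in the log are the server side of this call.
      loader.doBulkLoad(staged, admin, table, locator);
    }
  }
}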
2016-08-10 15:46:36,339 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(682): HFile bounds: first=row0 last=row99
2016-08-10 15:46:36,339 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(684): Region bounds: first= last=
2016-08-10 15:46:36,341 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/eaacd22f29d843e68c5615b77f9bc831_SeqId_4_
2016-08-10 15:46:36,342 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(742): Loaded HFile hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 into store 'f' as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/eaacd22f29d843e68c5615b77f9bc831_SeqId_4_ - updating store file list.
2016-08-10 15:46:36,347 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(777): Loaded HFile hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/eaacd22f29d843e68c5615b77f9bc831_SeqId_4_ into store 'f
2016-08-10 15:46:36,348 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 into store f (new location: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/eaacd22f29d843e68c5615b77f9bc831_SeqId_4_)
2016-08-10 15:46:36,353 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:46:36,355 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:36,356 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160018
2016-08-10 15:46:36,358 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:36,359 INFO [main] impl.RestoreClientImpl(292): ns1:test-1470869129051 has been successfully restored to ns1:table1_restore
2016-08-10 15:46:36,359 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (-876629076) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:36,359 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-10 15:46:36,359 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56456 because read count=-1. Number of active connections: 11
2016-08-10 15:46:36,359 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (748112006) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:36,359 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (598204029) to /10.22.16.34:56228 from tyu: closed
2016-08-10 15:46:36,359 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56457 because read count=-1. Number of active connections: 7
2016-08-10 15:46:36,359 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56455 because read count=-1. Number of active connections: 11
2016-08-10 15:46:36,359 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/
2016-08-10 15:46:36,360 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-10 15:46:36,361 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest
2016-08-10 15:46:36,364 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937
2016-08-10 15:46:36,364 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest
2016-08-10 15:46:36,364 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14708691290511' to 'ns2:table2_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511
2016-08-10 15:46:36,373 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x41cbc24 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:36,375 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x41cbc240x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:36,376 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@767d22b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:36,376 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:36,376 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:36,377 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x41cbc24-0x15676a151160019 connected
2016-08-10 15:46:36,379 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:36,379 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56461; # active connections: 10
2016-08-10 15:46:36,380 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:36,380 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56461 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:36,381 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns2:table2_restore'
2016-08-10 15:46:36,381 DEBUG [main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357
2016-08-10 15:46:36,383 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f in region [hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357]
2016-08-10 15:46:36,383 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:36,387 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 first=row0 last=row99
2016-08-10 15:46:36,389 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:36,389 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56462; # active connections: 11
2016-08-10 15:46:36,389 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:36,390 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56462 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:36,391 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-10 15:46:36,498 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-10 15:46:36,501 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-10 15:46:36,503 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:562260000000000
2016-08-10 15:46:36,608 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-10 15:46:36,621 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741902_1078{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:36,623 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001
2016-08-10 15:46:36,624 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns2:table2_restore
2016-08-10 15:46:36,631 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741903_1079{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 45
2016-08-10 15:46:36,813 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-10 15:46:37,034 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,035 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.: disabling compactions & flushes
2016-08-10 15:46:37,035 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,035 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,118 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-10 15:46:37,145 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b."}
2016-08-10 15:46:37,147 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:37,148 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:46:37,253 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,254 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:46:37,254 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {2046092792b2b999d6593fd7d2a8f33b state=OFFLINE, ts=1470869197253, server=null} to {2046092792b2b999d6593fd7d2a8f33b state=PENDING_OPEN, ts=1470869197254, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:37,255 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,255 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:37,257 INFO [PriorityRpcServer.handler=1,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,262 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(6339): Opening region: {ENCODED => 2046092792b2b999d6593fd7d2a8f33b, NAME => 'ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:46:37,262 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore 2046092792b2b999d6593fd7d2a8f33b
2016-08-10 15:46:37,262 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,265 INFO [StoreOpener-2046092792b2b999d6593fd7d2a8f33b-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:37,265 INFO [StoreOpener-2046092792b2b999d6593fd7d2a8f33b-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:46:37,266 DEBUG [StoreOpener-2046092792b2b999d6593fd7d2a8f33b-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f
2016-08-10 15:46:37,267 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b
2016-08-10 15:46:37,271 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:46:37,271 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(871): Onlined 2046092792b2b999d6593fd7d2a8f33b; next sequenceid=2
2016-08-10 15:46:37,271 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:46:37,272 INFO [PostOpenDeployTasks:2046092792b2b999d6593fd7d2a8f33b] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,273 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {2046092792b2b999d6593fd7d2a8f33b state=PENDING_OPEN, ts=1470869197254, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,273 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStates(1106): Transition {2046092792b2b999d6593fd7d2a8f33b state=PENDING_OPEN, ts=1470869197254, server=10.22.16.34,56228,1470869104167} to {2046092792b2b999d6593fd7d2a8f33b state=OPEN, ts=1470869197273, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:37,273 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,273 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:37,274 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStates(452): Onlined 2046092792b2b999d6593fd7d2a8f33b on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,274 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,274 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869197274,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-10 15:46:37,274 ERROR [B.defaultRpcServer.handler=2,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:37,275 DEBUG [PostOpenDeployTasks:2046092792b2b999d6593fd7d2a8f33b] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:37,280 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:37,280 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:37,281 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META
2016-08-10 15:46:37,612 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:562260000000000
2016-08-10 15:46:37,613 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 1.1110sec: CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=FINISHED
2016-08-10 15:46:37,625 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-10 15:46:37,625 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns2:table2_restore completed
2016-08-10 15:46:37,626 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:37,626 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160019
2016-08-10 15:46:37,627 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:37,628 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (51817429) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:37,628 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56462 because read count=-1. Number of active connections: 11
2016-08-10 15:46:37,628 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56461 because read count=-1. Number of active connections: 11
2016-08-10 15:46:37,628 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:56218; local cluster node: hdfs://localhost:56218
2016-08-10 15:46:37,629 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511 on local cluster, back it up before restore
2016-08-10 15:46:37,628 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (1991690545) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:37,646 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741904_1080{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 12093
2016-08-10 15:46:38,052 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-10 15:46:38,053 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-10 15:46:38,071 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357
2016-08-10 15:46:38,072 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3454e47 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:38,077 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3454e470x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:38,078 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a9c4709, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:38,078 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:38,078 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:38,079 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3454e47-0x15676a15116001a connected
2016-08-10 15:46:38,080 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:38,080 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56467; # active connections: 10
2016-08-10 15:46:38,081 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:38,081 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56467 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:38,087 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available
2016-08-10 15:46:38,093 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:38,094 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56468; # active connections: 11
2016-08-10 15:46:38,097 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:38,097 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56468 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:38,103 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:38,106 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 first=row0 last=row99
2016-08-10 15:46:38,109 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b., hostname=10.22.16.34,56228,1470869104167, seqNum=2 for row with hfile group [{[B@17548d63,hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56}]
2016-08-10 15:46:38,112 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:38,112 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56469; # active connections: 7
2016-08-10 15:46:38,113 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:38,113 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56469 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:38,113 INFO [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HStore(670): Validating hfile at hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 for inclusion in store f region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:46:38,116 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HStore(682): HFile bounds: first=row0 last=row99 2016-08-10 15:46:38,116 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HStore(684): Region bounds: first= last= 2016-08-10 15:46:38,117 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/c8dffbf1862546e0bdc352b959d501ee_SeqId_4_ 2016-08-10 15:46:38,118 INFO [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HStore(742): Loaded HFile hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 into store 'f' as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/c8dffbf1862546e0bdc352b959d501ee_SeqId_4_ - updating store file list. 2016-08-10 15:46:38,124 INFO [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HStore(777): Loaded HFile hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/c8dffbf1862546e0bdc352b959d501ee_SeqId_4_ into store 'f 2016-08-10 15:46:38,124 INFO [B.defaultRpcServer.handler=4,queue=0,port=56228] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 into store f (new location: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/c8dffbf1862546e0bdc352b959d501ee_SeqId_4_) 2016-08-10 15:46:38,124 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:46:38,125 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-10 15:46:38,125 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116001a 2016-08-10 15:46:38,128 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:46:38,128 INFO [main] impl.RestoreClientImpl(292): ns2:test-14708691290511 has been successfully restored to ns2:table2_restore 2016-08-10 15:46:38,129 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56468 because read count=-1. 
Number of active connections: 11 2016-08-10 15:46:38,129 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-10 15:46:38,129 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/ 2016-08-10 15:46:38,129 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-2136201228) to /10.22.16.34:56228 from tyu: closed 2016-08-10 15:46:38,129 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (1976198058) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:46:38,129 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56467 because read count=-1. Number of active connections: 11 2016-08-10 15:46:38,129 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56469 because read count=-1. Number of active connections: 7 2016-08-10 15:46:38,129 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (1222800853) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:46:38,129 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-10 15:46:38,130 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:46:38,132 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:46:38,133 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:46:38,133 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14708691290512' to 'ns3:table3_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512 2016-08-10 15:46:38,139 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/archive/data/ns3/test-14708691290512 does not exists 2016-08-10 15:46:38,139 DEBUG [main] util.RestoreServerUtil(315): find table descriptor but no archive dir for table ns3:test-14708691290512, will only create table 2016-08-10 15:46:38,140 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5509f08c connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:46:38,141 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5509f08c0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:46:38,142 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38ebbf56, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:46:38,142 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:46:38,142 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:46:38,143 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5509f08c-0x15676a15116001b connected 2016-08-10 15:46:38,144 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use 
2016-08-10 15:46:38,144 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:38,145 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56473; # active connections: 10
2016-08-10 15:46:38,145 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:38,145 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56473 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:38,146 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns3:table3_restore'
2016-08-10 15:46:38,147 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:38,147 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56474; # active connections: 11
2016-08-10 15:46:38,148 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:38,148 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56474 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:38,149 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-10 15:46:38,254 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-10 15:46:38,257 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-10 15:46:38,259 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:562260000000000
2016-08-10 15:46:38,362 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-10 15:46:38,378 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741905_1081{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:38,380 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001
2016-08-10 15:46:38,381 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns3:table3_restore
2016-08-10 15:46:38,388 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741906_1082{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:38,389 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:46:38,389 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.: disabling compactions & flushes
2016-08-10 15:46:38,390 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:46:38,390 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
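
The HMaster(1495) entry above is the server-side trace of a plain Admin.createTable call: a single family 'f' with every attribute at its default. A client-side sketch that would produce the same descriptor; this is an illustration of the era's public API, not the test's own code, and the Admin handle is assumed to come from an open Connection:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    class CreateRestoreTarget {
      static void create(Admin admin) throws IOException {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns3:table3_restore"));
        // VERSIONS => '1' and BLOCKSIZE => '65536' are the defaults; set
        // explicitly here only to mirror the logged descriptor.
        htd.addFamily(new HColumnDescriptor("f").setMaxVersions(1).setBlocksize(65536));
        admin.createTable(htd); // returns once the CreateTableProcedure finishes
      }
    }
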
2016-08-10 15:46:38,502 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3."}
2016-08-10 15:46:38,503 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:38,504 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:46:38,567 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-10 15:46:38,614 INFO [ProcedureExecutor-1] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,615 ERROR [ProcedureExecutor-1] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:46:38,615 INFO [ProcedureExecutor-1] master.RegionStates(1106): Transition {eca8595ba8e4dbe092e67a04f23a6fe3 state=OFFLINE, ts=1470869198614, server=null} to {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_OPEN, ts=1470869198615, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:38,616 INFO [ProcedureExecutor-1] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,616 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:38,618 INFO [PriorityRpcServer.handler=4,queue=0,port=56228] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:46:38,622 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(6339): Opening region: {ENCODED => eca8595ba8e4dbe092e67a04f23a6fe3, NAME => 'ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:46:38,623 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore eca8595ba8e4dbe092e67a04f23a6fe3
2016-08-10 15:46:38,623 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:46:38,626 INFO [StoreOpener-eca8595ba8e4dbe092e67a04f23a6fe3-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:38,626 INFO [StoreOpener-eca8595ba8e4dbe092e67a04f23a6fe3-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:46:38,627 DEBUG [StoreOpener-eca8595ba8e4dbe092e67a04f23a6fe3-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/f
2016-08-10 15:46:38,628 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3
2016-08-10 15:46:38,632 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:46:38,632 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(871): Onlined eca8595ba8e4dbe092e67a04f23a6fe3; next sequenceid=2
2016-08-10 15:46:38,633 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825
2016-08-10 15:46:38,634 INFO [PostOpenDeployTasks:eca8595ba8e4dbe092e67a04f23a6fe3] regionserver.HRegionServer(1952): Post open deploy tasks for ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:46:38,634 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_OPEN, ts=1470869198615, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,634 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStates(1106): Transition {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_OPEN, ts=1470869198615, server=10.22.16.34,56228,1470869104167} to {eca8595ba8e4dbe092e67a04f23a6fe3 state=OPEN, ts=1470869198634, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:38,635 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,635 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:38,635 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStates(452): Onlined eca8595ba8e4dbe092e67a04f23a6fe3 on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,636 DEBUG [ProcedureExecutor-1] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,636 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869198636,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"}
2016-08-10 15:46:38,636 ERROR [B.defaultRpcServer.handler=2,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:38,636 DEBUG [PostOpenDeployTasks:eca8595ba8e4dbe092e67a04f23a6fe3] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:46:38,637 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:38,637 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:38,638 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META
2016-08-10 15:46:38,870 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-10 15:46:38,970 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:562260000000000
2016-08-10 15:46:38,970 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 710msec: CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=FINISHED
2016-08-10 15:46:39,373 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-10 15:46:39,373 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns3:table3_restore completed
2016-08-10 15:46:39,373 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:39,373 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116001b
2016-08-10 15:46:39,376 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:39,377 INFO [main] impl.RestoreClientImpl(292): ns3:test-14708691290512 has been successfully restored to ns3:table3_restore
2016-08-10 15:46:39,377 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56474 because read count=-1. Number of active connections: 11
2016-08-10 15:46:39,377 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-10 15:46:39,377 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/
2016-08-10 15:46:39,377 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (1181552023) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:39,377 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-1167930928) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:39,377 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56473 because read count=-1. Number of active connections: 11
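
Two things in the span above are easy to misread. First, the client blocks inside HBaseAdmin$TableFuture by repeatedly asking the master "Checking to see if procedure is done procId=17" until the CreateTableProcedure reaches FINISHED. Second, the TableNotFoundException ERRORs from TableStateManager fire while the new table's state has not yet been published to hbase:meta ("Updated table ... state to ENABLED in META" only appears afterwards); the procedure completes regardless, so in this run they appear to be noise rather than failures. A caller that wants its own readiness check can poll the public Admin API; a hedged sketch, with an illustrative timeout:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    class WaitForTable {
      // Block until every region of a new table is open, or give up.
      static void await(Admin admin, TableName tn) throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + 60_000L; // illustrative timeout
        while (!admin.isTableAvailable(tn)) {
          if (System.currentTimeMillis() > deadline) {
            throw new IOException("Table " + tn + " not available within 60s");
          }
          Thread.sleep(100);
        }
      }
    }
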
2016-08-10 15:46:39,377 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-10 15:46:39,379 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513/.backup.manifest
2016-08-10 15:46:39,381 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937
2016-08-10 15:46:39,381 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns4/test-14708691290513/.backup.manifest
2016-08-10 15:46:39,381 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns4:test-14708691290513' to 'ns4:table4_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513
2016-08-10 15:46:39,388 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513/archive/data/ns4/test-14708691290513 does not exists
2016-08-10 15:46:39,388 DEBUG [main] util.RestoreServerUtil(315): find table descriptor but no archive dir for table ns4:test-14708691290513, will only create table
2016-08-10 15:46:39,388 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x22952fd connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:39,391 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x22952fd0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:39,391 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6161b1aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:39,391 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:39,391 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:39,392 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x22952fd-0x15676a15116001c connected
2016-08-10 15:46:39,394 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:39,394 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56479; # active connections: 10
2016-08-10 15:46:39,394 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:39,395 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56479 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:39,395 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns4:table4_restore'
2016-08-10 15:46:39,399 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:39,399 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56480; # active connections: 11
2016-08-10 15:46:39,400 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:39,400 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56480 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:39,401 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.HMaster(1495): Client=tyu//10.22.16.34 create 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-10 15:46:39,506 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-10 15:46:39,509 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-10 15:46:39,511 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:table4_restore/write-master:562260000000000
2016-08-10 15:46:39,616 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-10 15:46:39,631 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741907_1083{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 291
2016-08-10 15:46:39,818 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-10 15:46:40,040 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns4/table4_restore/.tabledesc/.tableinfo.0000000001
2016-08-10 15:46:40,041 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(6162): creating HRegion ns4:table4_restore HTD == 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns4:table4_restore
2016-08-10 15:46:40,049 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741908_1084{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:40,050 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,050 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.: disabling compactions & flushes
2016-08-10 15:46:40,051 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,051 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,125 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-10 15:46:40,162 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f."}
2016-08-10 15:46:40,163 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:40,164 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:46:40,273 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,274 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:46:40,274 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {f159bc2dc00e160a8e40e9cbd5189e8f state=OFFLINE, ts=1470869200273, server=null} to {f159bc2dc00e160a8e40e9cbd5189e8f state=PENDING_OPEN, ts=1470869200274, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:40,274 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,275 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:40,276 INFO [PriorityRpcServer.handler=3,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,281 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(6339): Opening region: {ENCODED => f159bc2dc00e160a8e40e9cbd5189e8f, NAME => 'ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:46:40,281 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table4_restore f159bc2dc00e160a8e40e9cbd5189e8f
2016-08-10 15:46:40,282 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,284 INFO [StoreOpener-f159bc2dc00e160a8e40e9cbd5189e8f-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:40,285 INFO [StoreOpener-f159bc2dc00e160a8e40e9cbd5189e8f-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:46:40,285 DEBUG [StoreOpener-f159bc2dc00e160a8e40e9cbd5189e8f-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/table4_restore/f159bc2dc00e160a8e40e9cbd5189e8f/f
2016-08-10 15:46:40,286 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/table4_restore/f159bc2dc00e160a8e40e9cbd5189e8f
2016-08-10 15:46:40,291 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/table4_restore/f159bc2dc00e160a8e40e9cbd5189e8f/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:46:40,291 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(871): Onlined f159bc2dc00e160a8e40e9cbd5189e8f; next sequenceid=2
2016-08-10 15:46:40,292 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:46:40,293 INFO [PostOpenDeployTasks:f159bc2dc00e160a8e40e9cbd5189e8f] regionserver.HRegionServer(1952): Post open deploy tasks for ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,293 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {f159bc2dc00e160a8e40e9cbd5189e8f state=PENDING_OPEN, ts=1470869200274, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,293 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(1106): Transition {f159bc2dc00e160a8e40e9cbd5189e8f state=PENDING_OPEN, ts=1470869200274, server=10.22.16.34,56228,1470869104167} to {f159bc2dc00e160a8e40e9cbd5189e8f state=OPEN, ts=1470869200293, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:40,293 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,294 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:40,294 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(452): Onlined f159bc2dc00e160a8e40e9cbd5189e8f on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,294 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,295 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869200295,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"}
2016-08-10 15:46:40,295 ERROR [B.defaultRpcServer.handler=4,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:40,298 DEBUG [PostOpenDeployTasks:f159bc2dc00e160a8e40e9cbd5189e8f] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:46:40,299 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:40,299 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] handler.OpenRegionHandler(126): Opened ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f. on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:40,300 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to ENABLED in META
2016-08-10 15:46:40,624 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:table4_restore/write-master:562260000000000
2016-08-10 15:46:40,624 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 1.1160sec: CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=FINISHED
2016-08-10 15:46:40,628 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-10 15:46:40,629 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns4:table4_restore completed
2016-08-10 15:46:40,629 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:40,629 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116001c
2016-08-10 15:46:40,630 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:40,631 INFO [main] impl.RestoreClientImpl(292): ns4:test-14708691290513 has been successfully restored to ns4:table4_restore
2016-08-10 15:46:40,631 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-1513899092) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:40,631 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56480 because read count=-1. Number of active connections: 11
2016-08-10 15:46:40,631 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56479 because read count=-1. Number of active connections: 11
2016-08-10 15:46:40,631 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-10 15:46:40,632 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns4/test-14708691290513/
2016-08-10 15:46:40,632 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished
2016-08-10 15:46:40,631 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (1954399742) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:40,632 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1470869129051, ns2:test-14708691290511, ns3:test-14708691290512, ns4:test-14708691290513] are successful!
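
Each restore above opens a short-lived hconnection against the mini-cluster's ZooKeeper ensemble (localhost:50432, baseZNode=/1) and tears it down as soon as the table operation completes. Equivalent client wiring, for reference; the quorum and client port come straight from the log, and 'zookeeper.znode.parent' is the standard property behind the non-default base znode:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    class MiniClusterConnection {
      static Connection open() throws java.io.IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("hbase.zookeeper.property.clientPort", "50432");
        conf.set("zookeeper.znode.parent", "/1"); // baseZNode=/1 in the log
        return ConnectionFactory.createConnection(conf); // caller closes it
      }
    }
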
2016-08-10 15:46:40,678 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869176664/ns1/test-1470869129051/.backup.manifest
2016-08-10 15:46:40,681 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869176664
2016-08-10 15:46:40,682 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869176664/ns1/test-1470869129051/.backup.manifest
2016-08-10 15:46:40,682 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869176664/ns2/test-14708691290511/.backup.manifest
2016-08-10 15:46:40,685 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869176664
2016-08-10 15:46:40,685 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869176664/ns2/test-14708691290511/.backup.manifest
2016-08-10 15:46:40,686 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869176664/ns3/test-14708691290512/.backup.manifest
2016-08-10 15:46:40,688 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869176664
2016-08-10 15:46:40,689 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869176664/ns3/test-14708691290512/.backup.manifest
2016-08-10 15:46:40,689 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x249707cc connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:40,694 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x249707cc0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:40,695 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@526fd458, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:40,695 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:40,695 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:40,696 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x249707cc-0x15676a15116001d connected
2016-08-10 15:46:40,697 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:40,697 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56487; # active connections: 10
2016-08-10 15:46:40,698 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:40,698 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56487 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:40,706 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116001d
2016-08-10 15:46:40,706 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:40,707 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-10 15:46:40,707 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1775301741) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:40,707 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56487 because read count=-1. Number of active connections: 10
2016-08-10 15:46:40,708 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest
2016-08-10 15:46:40,711 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937
2016-08-10 15:46:40,711 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns1/test-1470869129051/.backup.manifest
2016-08-10 15:46:40,711 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1470869129051' to 'ns1:table1_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051
2016-08-10 15:46:40,720 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7b16767e connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:40,722 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7b16767e0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:40,723 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d6036fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:40,723 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:40,723 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:40,724 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7b16767e-0x15676a15116001e connected
2016-08-10 15:46:40,725 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:40,725 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56491; # active connections: 10
2016-08-10 15:46:40,726 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:40,726 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56491 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:40,727 INFO [main] util.RestoreServerUtil(585): Truncating exising target table 'ns1:table1_restore', preserving region splits
2016-08-10 15:46:40,729 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:40,729 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56492; # active connections: 11
2016-08-10 15:46:40,729 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:40,730 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56492 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:40,730 INFO [main] client.HBaseAdmin$10(780): Started disable of ns1:table1_restore
2016-08-10 15:46:40,734 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.HMaster(1986): Client=tyu//10.22.16.34 disable ns1:table1_restore
2016-08-10 15:46:40,849 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store.
2016-08-10 15:46:40,852 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:562260000000001
2016-08-10 15:46:40,854 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-10 15:46:40,958 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-10 15:46:40,978 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-10 15:46:41,068 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869201068,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-10 15:46:41,069 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:41,069 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLING in META
2016-08-10 15:46:41,163 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-10 15:46:41,179 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(395): Offlining 1 regions.
2016-08-10 15:46:41,183 DEBUG [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. (offlining), current state: {3d6498df4d520f901c490789b272c507 state=OPEN, ts=1470869195481, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:41,183 INFO [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {3d6498df4d520f901c490789b272c507 state=OPEN, ts=1470869195481, server=10.22.16.34,56228,1470869104167} to {3d6498df4d520f901c490789b272c507 state=PENDING_CLOSE, ts=1470869201183, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:41,183 INFO [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. with state=PENDING_CLOSE
2016-08-10 15:46:41,184 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:41,187 INFO [PriorityRpcServer.handler=0,queue=0,port=56228] regionserver.RSRpcServices(1314): Close 3d6498df4d520f901c490789b272c507, moving to null
2016-08-10 15:46:41,189 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:41,189 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1419): Closing ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.: disabling compactions & flushes
2016-08-10 15:46:41,189 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:41,191 INFO [StoreCloserThread-ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.-1] regionserver.HStore(839): Closed f
2016-08-10 15:46:41,191 DEBUG [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.16.34,56228,1470869104167 for region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:41,191 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:46:41,196 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2
2016-08-10 15:46:41,198 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1552): Closed ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:41,199 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.AssignmentManager(2884): Got transition CLOSED for {3d6498df4d520f901c490789b272c507 state=PENDING_CLOSE, ts=1470869201183, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:41,199 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStates(1106): Transition {3d6498df4d520f901c490789b272c507 state=PENDING_CLOSE, ts=1470869201183, server=10.22.16.34,56228,1470869104167} to {3d6498df4d520f901c490789b272c507 state=OFFLINE, ts=1470869201199, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:41,199 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. with state=OFFLINE
2016-08-10 15:46:41,200 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:41,201 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStates(590): Offlined 3d6498df4d520f901c490789b272c507 from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:41,201 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:41,345 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869201345,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-10 15:46:41,347 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:41,348 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLED in META
2016-08-10 15:46:41,348 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(424): Disabled table, ns1:table1_restore, is completed.
2016-08-10 15:46:41,468 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-10 15:46:41,562 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:562260000000001
2016-08-10 15:46:41,562 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 722msec: DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=FINISHED
2016-08-10 15:46:41,971 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-10 15:46:41,972 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns1:table1_restore completed
2016-08-10 15:46:41,974 INFO [main] client.HBaseAdmin$8(615): Started truncating ns1:table1_restore
2016-08-10 15:46:41,979 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.HMaster(1848): Client=tyu//10.22.16.34 truncate ns1:table1_restore
2016-08-10 15:46:42,096 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store.
2016-08-10 15:46:42,099 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:562260000000002
2016-08-10 15:46:42,101 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(87): waiting for 'ns1:table1_restore' regions in transition
2016-08-10 15:46:42,211 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1470869202210,"tag":[],"qualifier":"","vlen":0}]},"row":"ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507."}
2016-08-10 15:46:42,212 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:42,213 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 3d6498df4d520f901c490789b272c507, NAME => 'ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.', STARTKEY => '', ENDKEY => ''}]
2016-08-10 15:46:42,216 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(408): Removing 'ns1:table1_restore' from region states.
2016-08-10 15:46:42,217 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(412): Marking 'ns1:table1_restore' as deleted.
2016-08-10 15:46:42,218 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1470869202217,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns1:table1_restore"}
2016-08-10 15:46:42,219 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:42,220 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1726): Deleted table ns1:table1_restore state from META
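
The sequence above ("Truncating exising target table ... preserving region splits" in the log's own words) is the standard truncate-with-splits flow: the client first runs a DisableTableProcedure (id=19), then a TruncateTableProcedure with preserveSplits=true (id=20), which deletes the region from hbase:meta, archives its files, and recreates the table with the same boundaries. The two public Admin calls behind it, as a sketch (an 'admin' handle from an open Connection is assumed):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    class TruncatePreservingSplits {
      static void truncate(Admin admin) throws IOException {
        TableName tn = TableName.valueOf("ns1:table1_restore");
        if (!admin.isTableDisabled(tn)) {
          admin.disableTable(tn);      // DisableTableProcedure on the master
        }
        admin.truncateTable(tn, true); // preserveSplits=true -> TruncateTableProcedure
      }
    }
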
2016-08-10 15:46:42,334 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507
2016-08-10 15:46:42,338 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits]
2016-08-10 15:46:42,346 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/eaacd22f29d843e68c5615b77f9bc831_SeqId_4_, to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/archive/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/eaacd22f29d843e68c5615b77f9bc831_SeqId_4_
2016-08-10 15:46:42,351 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits/6.seqid, to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/archive/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits/6.seqid
2016-08-10 15:46:42,352 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741900_1076 127.0.0.1:56219
2016-08-10 15:46:42,353 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507
2016-08-10 15:46:42,353 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(344): Table 'ns1:table1_restore' archived!
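Note that HFileArchiver does not delete the truncated region's files outright: each store file and recovered-edits file is moved under the cluster's archive directory with the data layout preserved, and only then is the source region directory removed. The from/to lines above follow a fixed path mapping, sketched here with plain Hadoop Path arithmetic (the helper name is hypothetical):

    import org.apache.hadoop.fs.Path;

    // Hypothetical helper mirroring the mapping in the "Finished archiving
    // ... to ..." lines: <root>/.tmp/data/<ns>/<table>/<region>/<family>/<file>
    // moves to <root>/archive/data/<ns>/<table>/<region>/<family>/<file>.
    public final class ArchivePaths {
      public static Path toArchive(Path rootDir, String ns, String table,
                                   String region, String family, String file) {
        return new Path(rootDir, String.format("archive/data/%s/%s/%s/%s/%s",
            ns, table, region, family, file));
      }

      public static void main(String[] args) {
        Path root = new Path("hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9");
        System.out.println(toArchive(root, "ns1", "table1_restore",
            "3d6498df4d520f901c490789b272c507", "f", "eaacd22f29d843e68c5615b77f9bc831_SeqId_4_"));
      }
    }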
2016-08-10 15:46:42,354 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741899_1075 127.0.0.1:56219
2016-08-10 15:46:42,404 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741899_1075, blk_1073741900_1076]
2016-08-10 15:46:42,469 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741909_1085{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:42,471 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001
2016-08-10 15:46:42,472 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns1:table1_restore
2016-08-10 15:46:42,480 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741910_1086{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:42,481 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:42,485 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.: disabling compactions & flushes
2016-08-10 15:46:42,485 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:42,485 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
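The HTD string logged by HRegion(6162) maps one-to-one onto the descriptor API of this era. A sketch reconstructing the same schema (every value is copied from the logged attributes; several are just the defaults being restated):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;

    public class RestoreTargetSchema {
      public static HTableDescriptor build() {
        HTableDescriptor htd =
            new HTableDescriptor(TableName.valueOf("ns1:table1_restore"));
        htd.addFamily(new HColumnDescriptor("f")
            .setDataBlockEncoding(DataBlockEncoding.NONE)   // DATA_BLOCK_ENCODING => 'NONE'
            .setBloomFilterType(BloomType.ROW)              // BLOOMFILTER => 'ROW'
            .setScope(0)                                    // REPLICATION_SCOPE => '0'
            .setMaxVersions(1)                              // VERSIONS => '1'
            .setCompressionType(Compression.Algorithm.NONE) // COMPRESSION => 'NONE'
            .setMinVersions(0)                              // MIN_VERSIONS => '0'
            .setTimeToLive(HConstants.FOREVER)              // TTL => 'FOREVER'
            .setKeepDeletedCells(KeepDeletedCells.FALSE)    // KEEP_DELETED_CELLS => 'FALSE'
            .setBlocksize(65536)                            // BLOCKSIZE => '65536'
            .setInMemory(false)                             // IN_MEMORY => 'false'
            .setBlockCacheEnabled(true));                   // BLOCKCACHE => 'true'
        return htd;
      }
    }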
2016-08-10 15:46:42,595 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507."}
2016-08-10 15:46:42,597 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:42,598 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:46:42,702 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,703 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:46:42,703 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {3d6498df4d520f901c490789b272c507 state=OFFLINE, ts=1470869202702, server=null} to {3d6498df4d520f901c490789b272c507 state=PENDING_OPEN, ts=1470869202703, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:42,704 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,704 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:42,706 INFO [PriorityRpcServer.handler=2,queue=0,port=56228] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:42,711 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(6339): Opening region: {ENCODED => 3d6498df4d520f901c490789b272c507, NAME => 'ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:46:42,711 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore 3d6498df4d520f901c490789b272c507
2016-08-10 15:46:42,711 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:42,714 INFO [StoreOpener-3d6498df4d520f901c490789b272c507-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:42,715 INFO [StoreOpener-3d6498df4d520f901c490789b272c507-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:46:42,716 DEBUG [StoreOpener-3d6498df4d520f901c490789b272c507-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f
2016-08-10 15:46:42,716 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507
2016-08-10 15:46:42,721 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:46:42,721 INFO [RS_OPEN_REGION-10.22.16.34:56228-0] regionserver.HRegion(871): Onlined 3d6498df4d520f901c490789b272c507; next sequenceid=2
2016-08-10 15:46:42,721 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:46:42,722 INFO [PostOpenDeployTasks:3d6498df4d520f901c490789b272c507] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:42,723 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {3d6498df4d520f901c490789b272c507 state=PENDING_OPEN, ts=1470869202703, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,723 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStates(1106): Transition {3d6498df4d520f901c490789b272c507 state=PENDING_OPEN, ts=1470869202703, server=10.22.16.34,56228,1470869104167} to {3d6498df4d520f901c490789b272c507 state=OPEN, ts=1470869202723, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:46:42,723 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,723 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:42,724 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.RegionStates(452): Onlined 3d6498df4d520f901c490789b272c507 on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,724 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,724 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869202724,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-10 15:46:42,724 ERROR [B.defaultRpcServer.handler=3,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:46:42,725 DEBUG [PostOpenDeployTasks:3d6498df4d520f901c490789b272c507] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:42,725 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:46:42,725 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-0] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507. on 10.22.16.34,56228,1470869104167
2016-08-10 15:46:42,726 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META
2016-08-10 15:46:42,836 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(129): truncate 'ns1:table1_restore' completed
2016-08-10 15:46:42,941 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:562260000000002
2016-08-10 15:46:42,941 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 856msec: TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=FINISHED
2016-08-10 15:46:43,110 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=20
2016-08-10 15:46:43,110 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns1:table1_restore completed
2016-08-10 15:46:43,110 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:43,110 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116001e
2016-08-10 15:46:43,111 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:43,112 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (286244205) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:43,112 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56492 because read count=-1. Number of active connections: 11
2016-08-10 15:46:43,112 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-1301719422) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:43,112 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:56218; local cluster node: hdfs://localhost:56218
2016-08-10 15:46:43,113 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/archive/data/ns1/test-1470869129051 on local cluster, back it up before restore
2016-08-10 15:46:43,112 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56491 because read count=-1. Number of active connections: 11
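At this point the target table has been truncated and the client starts restoring the full backup image: RestoreServerUtil detects that the image lives on the same cluster and copies the archived table data to a staging directory before bulk loading it (the "Copied to temporary path" line below). The copy itself is an ordinary FileSystem operation; a sketch under that assumption, with paths taken from the surrounding lines:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class StageBackupImage {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path src = new Path("hdfs://localhost:56218/backupUT/backup_1470869137937"
            + "/ns1/test-1470869129051/archive/data/ns1/test-1470869129051");
        Path staging = new Path("/user/tyu/hbase-staging/restore");
        FileSystem fs = src.getFileSystem(conf);
        // Copy the archived HFiles to the staging dir; the backup image is
        // left in place (deleteSource=false) so it stays intact.
        FileUtil.copy(fs, src, fs, staging, false, conf);
      }
    }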
2016-08-10 15:46:43,128 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741911_1087{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:43,129 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-10 15:46:43,130 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-10 15:46:43,145 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426
2016-08-10 15:46:43,145 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2957ad05 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:43,148 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2957ad050x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:43,148 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1eeb9b15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:43,149 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:43,149 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:43,149 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x2957ad05-0x15676a15116001f connected
2016-08-10 15:46:43,151 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:43,151 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56499; # active connections: 10
2016-08-10 15:46:43,152 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:43,152 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56499 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:43,159 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-10 15:46:43,166 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:43,166 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56500; # active connections: 11
2016-08-10 15:46:43,168 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:43,168 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56500 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:43,174 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:46:43,178 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 first=row0 last=row99
2016-08-10 15:46:43,181 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507., hostname=10.22.16.34,56228,1470869104167, seqNum=2 for row with hfile group [{[B@5359eae4,hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090}]
2016-08-10 15:46:43,182 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:43,182 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56501; # active connections: 7
2016-08-10 15:46:43,183 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:43,183 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56501 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:43,183 INFO [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HStore(670): Validating hfile at hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 for inclusion in store f region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:46:43,186 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HStore(682): HFile bounds: first=row0 last=row99
2016-08-10 15:46:43,186 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HStore(684): Region bounds: first= last=
2016-08-10 15:46:43,189 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/69811b16b6c941228eef5672bb18451b_SeqId_4_
2016-08-10 15:46:43,190 INFO [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HStore(742): Loaded HFile hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 into store 'f' as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/69811b16b6c941228eef5672bb18451b_SeqId_4_ - updating store file list.
2016-08-10 15:46:43,195 INFO [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HStore(777): Loaded HFile hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/69811b16b6c941228eef5672bb18451b_SeqId_4_ into store 'f
2016-08-10 15:46:43,195 INFO [B.defaultRpcServer.handler=2,queue=0,port=56228] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426/f/316c589ae70c468088bcdd6144bb4090 into store f (new location: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/f/69811b16b6c941228eef5672bb18451b_SeqId_4_)
2016-08-10 15:46:43,196 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:46:43,197 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:43,197 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116001f
2016-08-10 15:46:43,199 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:43,200 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (1581832268) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:43,200 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56501 because read count=-1. Number of active connections: 7
2016-08-10 15:46:43,200 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56499 because read count=-1. Number of active connections: 11
2016-08-10 15:46:43,200 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (1593275837) to /10.22.16.34:56228 from tyu: closed
2016-08-10 15:46:43,200 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56500 because read count=-1. Number of active connections: 11
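The sequence above (validate HFile bounds against the region, commit the file under the region's family directory, update the store file list) is the standard completebulkload path. Invoking it directly would look roughly like this; note that in this 2.0.0-SNAPSHOT the tool still lives in the mapreduce package (it moved in later releases), and the staging path is taken from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class BulkLoadRestoredHFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Equivalent of: hbase completebulkload <hfile dir> <table>.
        // The directory must contain one subdirectory per column family
        // ("f" here), each holding the HFiles to load.
        int exit = ToolRunner.run(conf, new LoadIncrementalHFiles(conf), new String[] {
            "hdfs://localhost:56218/user/tyu/hbase-staging/restore/1af52b0fe0f87b7398a77bf958343426",
            "ns1:table1_restore" });
        System.exit(exit);
      }
    }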
2016-08-10 15:46:43,200 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (1650566074) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:43,201 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns1:test-1470869129051' to 'ns1:table1_restore' from log dirs: hdfs://localhost:56218/backupUT/backup_1470869176664/WALs
2016-08-10 15:46:43,202 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x514ae5a4 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:43,204 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x514ae5a40x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:43,205 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44296e84, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:43,205 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:43,205 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:43,205 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x514ae5a4-0x15676a151160020 connected
2016-08-10 15:46:43,207 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:43,207 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56503; # active connections: 10
2016-08-10 15:46:43,207 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:43,208 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56503 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:43,213 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:56218/backupUT/backup_1470869176664/WALs from hbase tables ,ns1:test-1470869129051 to tables ,ns1:table1_restore
2016-08-10 15:46:43,213 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns1:test-1470869129051 into ns1:table1_restore
2016-08-10 15:46:43,218 DEBUG [main] mapreduce.WALPlayer(299): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1470869203213
2016-08-10 15:46:43,219 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2ea0c4c1 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:46:43,221 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2ea0c4c10x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:46:43,222 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77a76885, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:46:43,222 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:46:43,222 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:46:43,223 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x2ea0c4c1-0x15676a151160021 connected
2016-08-10 15:46:43,223 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:46:43,224 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56505; # active connections: 11
2016-08-10 15:46:43,224 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:43,224 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56505 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:43,229 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled
2016-08-10 15:46:43,229 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns1:test-1470869129051
2016-08-10 15:46:43,232 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:46:43,232 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56506; # active connections: 12
2016-08-10 15:46:43,232 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:46:43,232 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56506 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:46:43,236 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count
2016-08-10 15:46:43,236 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_e7268b73-f395-4bf7-b270-ae078bbb8e29
2016-08-10 15:46:43,247 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741912_1088{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:43,251 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
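The incremental phase replays the backed-up WALs through WALPlayer in bulk-output mode: rather than issuing Puts directly, the job writes HFiles via HFileOutputFormat2 into the bulk_output directory configured above ("add incremental job :..."), with one reduce partition per current region, and those HFiles are bulk-loaded afterwards. A sketch of the equivalent standalone invocation, assuming the bulk-output configuration key of this era ("wal.bulk.output"; paths and the source-to-target table mapping are taken from the log):

    import org.apache.hadoop.hbase.mapreduce.WALPlayer;

    public class ReplayBackupWALs {
      public static void main(String[] args) throws Exception {
        // Equivalent of: hbase org.apache.hadoop.hbase.mapreduce.WALPlayer
        //   -Dwal.bulk.output=<dir> <WAL input dir> <source table> <target table>
        // The -D option is handled by the tool's GenericOptionsParser.
        WALPlayer.main(new String[] {
            "-Dwal.bulk.output=/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1470869203213",
            "hdfs://localhost:56218/backupUT/backup_1470869176664/WALs",
            "ns1:test-1470869129051",
            "ns1:table1_restore" });
      }
    }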
2016-08-10 15:46:43,456 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-5942884319852816504.jar
2016-08-10 15:46:44,598 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-1050560983337863795.jar
2016-08-10 15:46:44,982 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6648537905062725892.jar
2016-08-10 15:46:45,000 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8313016131142197175.jar
2016-08-10 15:46:46,156 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-7688526361059758115.jar
2016-08-10 15:46:46,156 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-10 15:46:46,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-10 15:46:46,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-10 15:46:46,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:46:46,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-10 15:46:46,158 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-10 15:46:46,368 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4906892582542196142.jar
2016-08-10 15:46:46,369 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4906892582542196142.jar
2016-08-10 15:46:46,705 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-10 15:46:47,530 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4092059178015690735.jar
2016-08-10 15:46:47,531 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4906892582542196142.jar
2016-08-10 15:46:47,531 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4906892582542196142.jar
2016-08-10 15:46:47,532 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4092059178015690735.jar
2016-08-10 15:46:47,532 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:46:47,532 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns1:test-1470869129051 output configured.
2016-08-10 15:46:47,532 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:46:47,532 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160021
2016-08-10 15:46:47,533 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:46:47,534 DEBUG [main] mapreduce.WALPlayer(316): success configuring load incremental job
2016-08-10 15:46:47,534 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56505 because read count=-1. Number of active connections: 12
2016-08-10 15:46:47,534 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-1218894147) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:47,534 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-137361387) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:46:47,534 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56506 because read count=-1. Number of active connections: 12
2016-08-10 15:46:47,534 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:46:47,667 WARN [main] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
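The long run of "For class X, using jar Y" lines is TableMapReduceUtil resolving each dependency class to a containing jar (packaging a temporary jar from the test classpath where needed) and adding it to the job's tmpjars so the MapReduce tasks can load it. The corresponding call, sketched (the job name is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.mapreduce.Job;

    public class ShipJobDependencies {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "wal-to-hfiles");
        // Resolves HBase/ZooKeeper/netty/protobuf/guava/htrace/metrics
        // classes to jars and adds them to tmpjars; the DEBUG lines above
        // are this method's output.
        TableMapReduceUtil.addDependencyJars(job);
      }
    }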
2016-08-10 15:46:47,679 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741913_1089{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:47,712 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741914_1090{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 4669607
2016-08-10 15:46:48,127 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741915_1091{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,138 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741916_1092{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:48,156 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741917_1093{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,164 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741918_1094{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,173 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741919_1095{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:48,184 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741920_1096{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:48,192 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741921_1097{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:48,212 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741922_1098{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,221 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741923_1099{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,231 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741924_1100{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:48,241 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741925_1101{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 1795932
2016-08-10 15:46:48,656 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741926_1102{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:48,658 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-10 15:46:48,672 DEBUG [main] mapreduce.WALInputFormat(263): Scanning hdfs://localhost:56218/backupUT/backup_1470869176664/WALs for WAL files
2016-08-10 15:46:48,672 WARN [main] mapreduce.WALInputFormat(286): File hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.backup.manifest does not appear to be a WAL file. Skipping...
2016-08-10 15:46:48,678 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741927_1103{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,684 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741928_1104{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,699 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741929_1105{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:46:48,903 WARN [ResourceManager Event Processor] capacity.LeafQueue(610): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-10 15:46:48,903 WARN [ResourceManager Event Processor] capacity.LeafQueue(631): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-10 15:46:49,491 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0001_000001 (auth:SIMPLE)
2016-08-10 15:46:53,946 INFO [Socket Reader #1 for port 56308] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0001_000001 (auth:SIMPLE)
2016-08-10 15:46:54,208 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741930_1106{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:46:57,198 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0001_000001 (auth:SIMPLE)
2016-08-10 15:47:00,468 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0001_000001 (auth:SIMPLE)
2016-08-10 15:47:00,489 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(223): Exit code from container container_1470869125521_0001_01_000002 is : 143
2016-08-10 15:47:00,522 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741931_1107{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 13201
2016-08-10 15:47:00,530 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741932_1108{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:00,547 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741933_1109{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:00,564 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741934_1110{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741927_1103 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741928_1104 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741929_1105 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741931_1107 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741930_1106 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741925_1101 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741914_1090 127.0.0.1:56219
2016-08-10 15:47:01,590 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741917_1093 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741923_1099 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741918_1094 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741926_1102 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741922_1098 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741916_1092 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741924_1100 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741919_1095 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741915_1091 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741920_1096 127.0.0.1:56219
2016-08-10 15:47:01,591 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741913_1089 127.0.0.1:56219
2016-08-10 15:47:01,592 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741921_1097 127.0.0.1:56219
2016-08-10 15:47:02,324 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1470869203213
2016-08-10 15:47:02,325 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x71752bb2 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:47:02,327 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x71752bb20x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:47:02,328 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b013d1a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:47:02,328 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:47:02,328 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:47:02,329 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x71752bb2-0x15676a151160022 connected
2016-08-10 15:47:02,330 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:47:02,330 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56554; # active connections: 11
2016-08-10 15:47:02,331 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:02,331 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56554 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:02,336 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-10 15:47:02,338 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:56218/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1470869203213/_SUCCESS
2016-08-10 15:47:02,339 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:47:02,339 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56555; # active connections: 12
2016-08-10 15:47:02,340 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:02,340 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56555 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:02,341 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1470869203213. Does it contain files in subdirectories that correspond to column family names?
2016-08-10 15:47:02,341 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:47:02,342 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160022
2016-08-10 15:47:02,342 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:02,343 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0
2016-08-10 15:47:02,343 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56555 because read count=-1. Number of active connections: 12
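The "did not find any files to load" warning is a no-op case rather than a failure here: the WALPlayer output directory contains only the MapReduce _SUCCESS marker and no per-family subdirectory, meaning the job emitted no HFiles for this table, so there is nothing to bulk-load and the restore client continues. A quick way to inspect what the loader will see, sketched with a plain FileSystem listing (directory path from the log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class InspectBulkOutput {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path dir = new Path("/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/"
            + "hbase-tyu/bulk_output-ns1-table1_restore-1470869203213");
        FileSystem fs = dir.getFileSystem(conf);
        // LoadIncrementalHFiles expects one subdirectory per column family;
        // plain files such as _SUCCESS are skipped, as the log above shows.
        for (FileStatus st : fs.listStatus(dir)) {
          System.out.println((st.isDirectory() ? "family dir: " : "skipped file: ")
              + st.getPath().getName());
        }
      }
    }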
Number of active connections: 12 2016-08-10 15:47:02,343 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160020 2016-08-10 15:47:02,343 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-567583937) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:02,343 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-1357115468) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:02,343 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56554 because read count=-1. Number of active connections: 12 2016-08-10 15:47:02,344 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:02,344 INFO [main] impl.RestoreClientImpl(292): ns1:test-1470869129051 has been successfully restored to ns1:table1_restore 2016-08-10 15:47:02,344 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-10 15:47:02,344 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns1/test-1470869129051/ 2016-08-10 15:47:02,344 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869176664 hdfs://localhost:56218/backupUT/backup_1470869176664/ns1/test-1470869129051/ 2016-08-10 15:47:02,344 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56503 because read count=-1. Number of active connections: 10 2016-08-10 15:47:02,344 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (-1302407168) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:02,344 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. 
to be implemented in future jira 2016-08-10 15:47:02,345 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest 2016-08-10 15:47:02,348 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:47:02,348 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns2/test-14708691290511/.backup.manifest 2016-08-10 15:47:02,348 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14708691290511' to 'ns2:table2_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511 2016-08-10 15:47:02,356 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x275400f8 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:02,359 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x275400f80x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:02,360 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2feaa872, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:02,360 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:02,360 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:02,361 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x275400f8-0x15676a151160023 connected 2016-08-10 15:47:02,362 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:02,362 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56559; # active connections: 10 2016-08-10 15:47:02,363 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:02,363 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56559 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:02,364 INFO [main] util.RestoreServerUtil(585): Truncating exising target table 'ns2:table2_restore', preserving region splits 2016-08-10 15:47:02,365 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:47:02,365 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56560; # active connections: 11 2016-08-10 15:47:02,368 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:02,368 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56560 with version info: version: "2.0.0-SNAPSHOT" url: 
"git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:02,368 INFO [main] client.HBaseAdmin$10(780): Started disable of ns2:table2_restore 2016-08-10 15:47:02,369 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.HMaster(1986): Client=tyu//10.22.16.34 disable ns2:table2_restore 2016-08-10 15:47:02,475 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-10 15:47:02,477 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-10 15:47:02,478 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:562260000000001 2016-08-10 15:47:02,581 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-10 15:47:02,693 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869222693,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-10 15:47:02,694 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:02,696 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLING in META 2016-08-10 15:47:02,784 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-10 15:47:02,801 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-10 15:47:02,802 DEBUG [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. (offlining), current state: {2046092792b2b999d6593fd7d2a8f33b state=OPEN, ts=1470869197273, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:02,803 INFO [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {2046092792b2b999d6593fd7d2a8f33b state=OPEN, ts=1470869197273, server=10.22.16.34,56228,1470869104167} to {2046092792b2b999d6593fd7d2a8f33b state=PENDING_CLOSE, ts=1470869222803, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:02,803 INFO [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 
with state=PENDING_CLOSE 2016-08-10 15:47:02,803 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:02,805 INFO [PriorityRpcServer.handler=4,queue=0,port=56228] regionserver.RSRpcServices(1314): Close 2046092792b2b999d6593fd7d2a8f33b, moving to null 2016-08-10 15:47:02,805 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 2016-08-10 15:47:02,805 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.: disabling compactions & flushes 2016-08-10 15:47:02,805 DEBUG [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.16.34,56228,1470869104167 for region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 2016-08-10 15:47:02,806 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 2016-08-10 15:47:02,807 INFO [StoreCloserThread-ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.-1] regionserver.HStore(839): Closed f 2016-08-10 15:47:02,807 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:47:02,815 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2 2016-08-10 15:47:02,817 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 2016-08-10 15:47:02,817 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.AssignmentManager(2884): Got transition CLOSED for {2046092792b2b999d6593fd7d2a8f33b state=PENDING_CLOSE, ts=1470869222803, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167 2016-08-10 15:47:02,818 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.RegionStates(1106): Transition {2046092792b2b999d6593fd7d2a8f33b state=PENDING_CLOSE, ts=1470869222803, server=10.22.16.34,56228,1470869104167} to {2046092792b2b999d6593fd7d2a8f33b state=OFFLINE, ts=1470869222818, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:02,818 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 
with state=OFFLINE 2016-08-10 15:47:02,818 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:02,819 INFO [B.defaultRpcServer.handler=1,queue=0,port=56226] master.RegionStates(590): Offlined 2046092792b2b999d6593fd7d2a8f33b from 10.22.16.34,56228,1470869104167 2016-08-10 15:47:02,820 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. 2016-08-10 15:47:02,960 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869222959,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-10 15:47:02,961 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:02,962 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLED in META 2016-08-10 15:47:02,962 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(424): Disabled table, ns2:table2_restore, is completed. 2016-08-10 15:47:03,091 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-10 15:47:03,181 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:562260000000001 2016-08-10 15:47:03,182 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 702msec: DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=FINISHED 2016-08-10 15:47:03,416 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741920_1096, blk_1073741921_1097, blk_1073741922_1098, blk_1073741923_1099, blk_1073741924_1100, blk_1073741925_1101, blk_1073741926_1102, blk_1073741927_1103, blk_1073741928_1104, blk_1073741929_1105, blk_1073741930_1106, blk_1073741931_1107, blk_1073741913_1089, blk_1073741914_1090, blk_1073741915_1091, blk_1073741916_1092, blk_1073741917_1093, blk_1073741918_1094, blk_1073741919_1095] 2016-08-10 15:47:03,598 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-10 15:47:03,599 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns2:table2_restore completed 2016-08-10 15:47:03,600 INFO [main] client.HBaseAdmin$8(615): Started truncating ns2:table2_restore 2016-08-10 15:47:03,601 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.HMaster(1848): Client=tyu//10.22.16.34 truncate ns2:table2_restore 2016-08-10 15:47:03,704 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 
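The client-side sequence recorded above (HBaseAdmin disable of ns2:table2_restore followed by a truncate with preserveSplits=true) maps onto the public Admin API. A minimal sketch of the equivalent calls, assuming an HBase 1.x/2.0-era client on the classpath; the configuration and connection setup are placeholders, not taken from the test code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplits {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("ns2:table2_restore");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // A table must be disabled before it can be truncated,
          // which is why the log shows DISABLE completing first.
          admin.disableTable(table);
          // preserveSplits=true keeps the existing region boundaries,
          // matching "preserveSplits=true" in the TruncateTableProcedure entry.
          admin.truncateTable(table, true);
        }
      }
    }

The disable step matters: truncating an enabled table is rejected by the master, so restore utilities like the one logged here always disable first.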
2016-08-10 15:47:03,708 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:562260000000002
2016-08-10 15:47:03,709 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(87): waiting for 'ns2:table2_restore' regions in transition
2016-08-10 15:47:03,819 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1470869223819,"tag":[],"qualifier":"","vlen":0}]},"row":"ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b."}
2016-08-10 15:47:03,821 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:03,822 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 2046092792b2b999d6593fd7d2a8f33b, NAME => 'ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.', STARTKEY => '', ENDKEY => ''}]
2016-08-10 15:47:03,824 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(408): Removing 'ns2:table2_restore' from region states.
2016-08-10 15:47:03,824 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(412): Marking 'ns2:table2_restore' as deleted.
2016-08-10 15:47:03,825 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1470869223824,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns2:table2_restore"}
2016-08-10 15:47:03,825 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:03,826 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1726): Deleted table ns2:table2_restore state from META
2016-08-10 15:47:03,939 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(340): Archiving region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. from FS
2016-08-10 15:47:03,940 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b
2016-08-10 15:47:03,942 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits]
2016-08-10 15:47:03,949 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/c8dffbf1862546e0bdc352b959d501ee_SeqId_4_, to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/archive/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/c8dffbf1862546e0bdc352b959d501ee_SeqId_4_
2016-08-10 15:47:03,955 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits/6.seqid, to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/archive/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits/6.seqid
2016-08-10 15:47:03,955 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741903_1079 127.0.0.1:56219
2016-08-10 15:47:03,956 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b
2016-08-10 15:47:03,956 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(344): Table 'ns2:table2_restore' archived!
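As the HFileArchiver entries show, "deleting" the region during truncate does not remove its files; each store file and seqid file is moved into a mirrored tree under the cluster's archive directory, where snapshots and backups can still reference it. A simplified, hypothetical sketch of that move-not-delete pattern using the plain Hadoop FileSystem API (an illustration of the idea only, not the actual HFileArchiver implementation, which also handles name collisions and retries):

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveSketch {
      // Move a region directory's contents under the archive tree, e.g.
      //   .../data/ns2/table2_restore/<region>  ->  .../archive/data/ns2/table2_restore/<region>
      static void archiveRegion(FileSystem fs, Path regionDir, Path archiveDir)
          throws Exception {
        fs.mkdirs(archiveDir);
        for (FileStatus child : fs.listStatus(regionDir)) {
          // Renames whole subtrees (f/, recovered.edits/) instead of deleting them.
          fs.rename(child.getPath(), new Path(archiveDir, child.getPath().getName()));
        }
        fs.delete(regionDir, true); // only the now-empty region directory is removed
      }
    }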
2016-08-10 15:47:03,957 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741902_1078 127.0.0.1:56219
2016-08-10 15:47:04,077 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741935_1111{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 290
2016-08-10 15:47:04,485 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001
2016-08-10 15:47:04,486 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns2:table2_restore
2016-08-10 15:47:04,495 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741936_1112{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:04,496 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:04,496 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.: disabling compactions & flushes
2016-08-10 15:47:04,496 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:04,496 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
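The table descriptor dumped in the "creating HRegion" entry above can be reproduced with the client-side descriptor API of this era. A sketch, spelling out only the attributes the log prints (these are in fact the defaults plus the single family 'f'); the wrapper class name is invented for the example:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.regionserver.BloomType;

    public class DescriptorSketch {
      static HTableDescriptor tableDescriptor() {
        HColumnDescriptor f = new HColumnDescriptor("f");
        f.setMaxVersions(1);                  // VERSIONS => '1'
        f.setBloomFilterType(BloomType.ROW);  // BLOOMFILTER => 'ROW'
        f.setBlocksize(65536);                // BLOCKSIZE => '65536'
        f.setInMemory(false);                 // IN_MEMORY => 'false'
        f.setBlockCacheEnabled(true);         // BLOCKCACHE => 'true'
        HTableDescriptor htd =
            new HTableDescriptor(TableName.valueOf("ns2:table2_restore"));
        htd.addFamily(f);
        return htd;
      }
    }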
2016-08-10 15:47:04,605 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b."}
2016-08-10 15:47:04,606 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:04,607 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1571): Added 1
2016-08-10 15:47:04,715 INFO [ProcedureExecutor-5] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,716 ERROR [ProcedureExecutor-5] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
	at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
	at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
	at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
	at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
	at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
	at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:47:04,716 INFO [ProcedureExecutor-5] master.RegionStates(1106): Transition {2046092792b2b999d6593fd7d2a8f33b state=OFFLINE, ts=1470869224714, server=null} to {2046092792b2b999d6593fd7d2a8f33b state=PENDING_OPEN, ts=1470869224716, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:47:04,716 INFO [ProcedureExecutor-5] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,717 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:04,718 INFO [PriorityRpcServer.handler=1,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:04,723 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(6339): Opening region: {ENCODED => 2046092792b2b999d6593fd7d2a8f33b, NAME => 'ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.', STARTKEY => '', ENDKEY => ''}
2016-08-10 15:47:04,724 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore 2046092792b2b999d6593fd7d2a8f33b
2016-08-10 15:47:04,724 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:04,727 INFO [StoreOpener-2046092792b2b999d6593fd7d2a8f33b-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:47:04,727 INFO [StoreOpener-2046092792b2b999d6593fd7d2a8f33b-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-10 15:47:04,728 DEBUG [StoreOpener-2046092792b2b999d6593fd7d2a8f33b-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f
2016-08-10 15:47:04,728 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b
2016-08-10 15:47:04,732 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-10 15:47:04,733 INFO [RS_OPEN_REGION-10.22.16.34:56228-1] regionserver.HRegion(871): Onlined 2046092792b2b999d6593fd7d2a8f33b; next sequenceid=2
2016-08-10 15:47:04,733 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:47:04,734 INFO [PostOpenDeployTasks:2046092792b2b999d6593fd7d2a8f33b] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:04,734 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {2046092792b2b999d6593fd7d2a8f33b state=PENDING_OPEN, ts=1470869224716, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,734 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStates(1106): Transition {2046092792b2b999d6593fd7d2a8f33b state=PENDING_OPEN, ts=1470869224716, server=10.22.16.34,56228,1470869104167} to {2046092792b2b999d6593fd7d2a8f33b state=OPEN, ts=1470869224734, server=10.22.16.34,56228,1470869104167}
2016-08-10 15:47:04,735 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,735 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:04,736 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] master.RegionStates(452): Onlined 2046092792b2b999d6593fd7d2a8f33b on 10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,736 DEBUG [ProcedureExecutor-5] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,736 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869224736,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-10 15:47:04,736 ERROR [B.defaultRpcServer.handler=2,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
	at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
	at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
	at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
	at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
	at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
	at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:47:04,737 DEBUG [PostOpenDeployTasks:2046092792b2b999d6593fd7d2a8f33b] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:04,737 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:04,737 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-1] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b. on 10.22.16.34,56228,1470869104167
2016-08-10 15:47:04,738 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META
2016-08-10 15:47:04,847 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(129): truncate 'ns2:table2_restore' completed
2016-08-10 15:47:04,952 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:562260000000002
2016-08-10 15:47:04,952 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 1.2460sec: TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=FINISHED
2016-08-10 15:47:04,966 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=22
2016-08-10 15:47:04,967 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns2:table2_restore completed
2016-08-10 15:47:04,967 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:47:04,967 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160023
2016-08-10 15:47:04,968 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:04,969 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:56218; local cluster node: hdfs://localhost:56218
2016-08-10 15:47:04,969 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/archive/data/ns2/test-14708691290511 on local cluster, back it up before restore
2016-08-10 15:47:04,969 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56559 because read count=-1. Number of active connections: 11
2016-08-10 15:47:04,969 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56560 because read count=-1. Number of active connections: 11
2016-08-10 15:47:04,969 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (1615051968) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:04,969 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (412015350) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:04,985 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741937_1113{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:04,986 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-10 15:47:04,986 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-10 15:47:05,002 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357
2016-08-10 15:47:05,002 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x10af7e1c connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:47:05,005 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x10af7e1c0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:47:05,006 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2049e713, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:47:05,006 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:47:05,006 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:47:05,006 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x10af7e1c-0x15676a151160024 connected
2016-08-10 15:47:05,007 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:47:05,007 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56566; # active connections: 10
2016-08-10 15:47:05,008 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:05,008 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56566 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:05,013 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available
2016-08-10 15:47:05,019 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:47:05,019 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56567; # active connections: 11
2016-08-10 15:47:05,019 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:05,020 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56567 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:05,024 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-10 15:47:05,028 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 first=row0 last=row99
2016-08-10 15:47:05,031 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b., hostname=10.22.16.34,56228,1470869104167, seqNum=2 for row with hfile group [{[B@208bdb65,hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56}]
2016-08-10 15:47:05,032 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:47:05,032 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56568; # active connections: 7
2016-08-10 15:47:05,033 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:05,033 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56568 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:05,033 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(670): Validating hfile at hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 for inclusion in store f region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
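The LoadIncrementalHFiles and HStore entries around this point are the two halves of a bulk load: the client groups HFiles by target region, and the regionserver validates each file's key range against the region before committing it into the store. Driving the same load programmatically looks roughly like the sketch below, assuming the 1.x/2.0-era doBulkLoad(Path, Admin, Table, RegionLocator) overload; the staging directory is the one from the log and the class name is invented:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class BulkLoadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("ns2:table2_restore");
        // Directory laid out as <dir>/<family>/<hfile>, as in the staging path above.
        Path hfileDir = new Path(
            "hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin();
             Table table = conn.getTable(tn);
             RegionLocator locator = conn.getRegionLocator(tn)) {
          new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, admin, table, locator);
        }
      }
    }

The same loader can also be run from the command line, along the lines of: hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles <hfile dir> <table>. Note the earlier WARN about the empty ns1 bulk_output directory: the loader only looks inside subdirectories named after column families, so a directory containing only _SUCCESS yields nothing to load.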
2016-08-10 15:47:05,036 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(682): HFile bounds: first=row0 last=row99 2016-08-10 15:47:05,036 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(684): Region bounds: first= last= 2016-08-10 15:47:05,038 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/7d8f4a65462a4652a01a17b26ebad6f0_SeqId_4_ 2016-08-10 15:47:05,039 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(742): Loaded HFile hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 into store 'f' as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/7d8f4a65462a4652a01a17b26ebad6f0_SeqId_4_ - updating store file list. 2016-08-10 15:47:05,044 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(777): Loaded HFile hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/7d8f4a65462a4652a01a17b26ebad6f0_SeqId_4_ into store 'f 2016-08-10 15:47:05,044 INFO [B.defaultRpcServer.handler=3,queue=0,port=56228] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:56218/user/tyu/hbase-staging/restore/a06bab69e6ee6a1a194d4fd364f48357/f/0d7711c716f649a68e90fec66516fa56 into store f (new location: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/f/7d8f4a65462a4652a01a17b26ebad6f0_SeqId_4_) 2016-08-10 15:47:05,044 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197 2016-08-10 15:47:05,045 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-10 15:47:05,045 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160024 2016-08-10 15:47:05,048 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:05,049 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns2:test-14708691290511' to 'ns2:table2_restore' from log dirs: hdfs://localhost:56218/backupUT/backup_1470869176664/WALs 2016-08-10 15:47:05,049 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56567 because read count=-1. Number of active connections: 11 2016-08-10 15:47:05,049 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56568 because read count=-1. 
Number of active connections: 7 2016-08-10 15:47:05,049 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (864358093) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:05,049 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-1350545400) to /10.22.16.34:56228 from tyu: closed 2016-08-10 15:47:05,049 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56566 because read count=-1. Number of active connections: 11 2016-08-10 15:47:05,049 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (776509993) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:05,049 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6da311fb connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:05,051 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6da311fb0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:05,052 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e9a6bd5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:05,052 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:05,052 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:05,053 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6da311fb-0x15676a151160025 connected 2016-08-10 15:47:05,054 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:05,054 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56570; # active connections: 10 2016-08-10 15:47:05,054 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:05,054 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56570 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:05,055 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:56218/backupUT/backup_1470869176664/WALs from hbase tables ,ns2:test-14708691290511 to tables ,ns2:table2_restore 2016-08-10 15:47:05,055 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns2:test-14708691290511 into ns2:table2_restore 2016-08-10 15:47:05,057 DEBUG [main] mapreduce.WALPlayer(299): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1470869225056 2016-08-10 15:47:05,057 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x52308a32 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:05,059 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x52308a320x0, quorum=localhost:50432, baseZNode=/1 
Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:05,059 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d9cca79, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:05,060 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:05,060 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:05,060 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x52308a32-0x15676a151160026 connected 2016-08-10 15:47:05,061 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:47:05,061 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56572; # active connections: 11 2016-08-10 15:47:05,061 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:05,061 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56572 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:05,062 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-10 15:47:05,063 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns2:test-14708691290511 2016-08-10 15:47:05,065 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:05,065 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56573; # active connections: 12 2016-08-10 15:47:05,065 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:05,065 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56573 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:05,068 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-10 15:47:05,068 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_2d51a178-c8a2-4df5-bf6d-5594688efeaf 2016-08-10 15:47:05,074 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741938_1114{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 153 2016-08-10 15:47:05,478 WARN [main] mapreduce.TableMapReduceUtil(786): The 
hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 2016-08-10 15:47:05,607 DEBUG [10.22.16.34,56228,1470869104167_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-10 15:47:05,677 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6934113532490573618.jar 2016-08-10 15:47:05,892 INFO [10.22.16.34,56226,1470869103454_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x57e44367 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:05,897 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x57e443670x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:05,897 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61d8b80f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:05,898 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:05,898 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:05,898 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x57e44367-0x15676a151160027 connected 2016-08-10 15:47:05,898 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-10 15:47:05,901 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:05,901 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56576; # active connections: 13 2016-08-10 15:47:05,902 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:05,902 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56576 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:05,905 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:05,905 DEBUG [RpcServer.listener,port=56228] ipc.RpcServer$Listener(880): RpcServer.listener,port=56228: connection from 10.22.16.34:56577; # active connections: 7 2016-08-10 15:47:05,906 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:05,906 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56577 
with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:05,908 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 2016-08-10 15:47:05,910 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869107339 2016-08-10 15:47:05,910 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:47:05,910 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869138221 2016-08-10 15:47:05,911 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 2016-08-10 15:47:05,911 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869107985 2016-08-10 15:47:05,911 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:47:05,912 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869138221 2016-08-10 15:47:05,912 INFO [10.22.16.34,56226,1470869103454_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160027 2016-08-10 15:47:05,913 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:05,913 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56576 because read count=-1. Number of active connections: 13 2016-08-10 15:47:05,913 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56577 because read count=-1. 
2016-08-10 15:47:05,913 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (1902902994) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:05,913 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (1496944123) to /10.22.16.34:56228 from tyu: closed 2016-08-10 15:47:05,929 DEBUG [10.22.16.34,56226,1470869103454_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-10 15:47:06,418 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741902_1078, blk_1073741903_1079] 2016-08-10 15:47:06,840 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-1877467513373124603.jar 2016-08-10 15:47:07,224 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6886984849599797127.jar 2016-08-10 15:47:07,244 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-2394940729294543761.jar 2016-08-10 15:47:07,496 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0001_000001 (auth:SIMPLE) 2016-08-10 15:47:08,323 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-10 15:47:08,414 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8650587504827940106.jar 2016-08-10 15:47:08,414 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-10 15:47:08,415 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-10 15:47:08,415 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-10 15:47:08,415 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-10 15:47:08,415 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-10 15:47:08,416 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-10 15:47:08,618 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class
org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6724885253429830626.jar 2016-08-10 15:47:08,619 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6724885253429830626.jar 2016-08-10 15:47:09,785 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-9098091337909235386.jar 2016-08-10 15:47:09,786 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6724885253429830626.jar 2016-08-10 15:47:09,786 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-6724885253429830626.jar 2016-08-10 15:47:09,786 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-9098091337909235386.jar 2016-08-10 15:47:09,787 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar 2016-08-10 15:47:09,787 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns2:test-14708691290511 output configured. 2016-08-10 15:47:09,787 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-10 15:47:09,787 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160026 2016-08-10 15:47:09,788 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:09,789 DEBUG [main] mapreduce.WALPlayer(316): success configuring load incremental job 2016-08-10 15:47:09,789 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-132213803) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:09,789 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56572 because read count=-1. Number of active connections: 12 2016-08-10 15:47:09,789 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56573 because read count=-1. Number of active connections: 12 2016-08-10 15:47:09,789 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-1254283279) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:09,789 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-10 15:47:09,809 WARN [main] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
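The TableMapReduceUtil lines above show the job's dependency jars being resolved one by one, and HFileOutputFormat2 being configured so the WALPlayer job writes region-aligned HFiles instead of live Puts ("success configuring load incremental job"). A minimal sketch of that client-side pattern, assuming only the public MapReduce APIs of this era: the class name is invented, WALPlayer's own input/mapper wiring is omitted, and the bulkLoad helper mirrors the LoadIncrementalHFiles step that appears further down (including its "Skipping non-directory ..._SUCCESS" warning):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WalToHFilesJobSketch extends Configured implements Tool {
  private static final TableName TABLE = TableName.valueOf("ns2:test-14708691290511");

  @Override
  public int run(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(getConf());
    Job job = Job.getInstance(conf, "wal-to-hfiles-sketch");
    // Ships the hbase/guava/protobuf/netty/zookeeper/... jars with the job;
    // the "For class X, using jar Y" lines above are this resolution step.
    TableMapReduceUtil.addDependencyJars(job);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TABLE);
         RegionLocator locator = conn.getRegionLocator(TABLE)) {
      // Sets the reducer, TotalOrderPartitioner and HFile output format so
      // output is split along region boundaries -- the "Incremental table
      // ... output configured." line above.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }
    // WALPlayer supplies the input format and mapper itself; this sketch
    // stops after configuration instead of submitting the job.
    return 0;
  }

  // After the job has written its bulk_output directory, the HFiles are
  // handed to the bulk loader (target e.g. ns2:table2_restore below); plain
  // files such as _SUCCESS are skipped, which is the warning logged at
  // 15:47:23,053 further down.
  static void bulkLoad(Configuration conf, TableName target, Path bulkOutput) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(target);
         Admin admin = conn.getAdmin();
         RegionLocator locator = conn.getRegionLocator(target)) {
      new LoadIncrementalHFiles(conf).doBulkLoad(bulkOutput, admin, table, locator);
    }
  }

  public static void main(String[] args) throws Exception {
    // Running through ToolRunner is also what the JobResourceUploader
    // warning above asks for.
    System.exit(ToolRunner.run(new WalToHFilesJobSketch(), args));
  }
}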
2016-08-10 15:47:09,820 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741939_1115{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:09,839 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741940_1116{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:09,846 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741941_1117{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:09,854 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741942_1118{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:09,871 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741943_1119{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:09,878 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741944_1120{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:09,888 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741945_1121{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:09,897 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741946_1122{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 662658 2016-08-10 15:47:10,311 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741947_1123{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 1475955 2016-08-10 15:47:10,689 DEBUG [10.22.16.34,56262,1470869110526_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-10 15:47:10,731 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* 
addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741948_1124{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:10,750 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741949_1125{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:10,760 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741950_1126{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:10,769 INFO [IPC Server handler 4 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741951_1127{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:10,779 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741952_1128{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:10,780 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-10 15:47:10,792 DEBUG [main] mapreduce.WALInputFormat(263): Scanning hdfs://localhost:56218/backupUT/backup_1470869176664/WALs for WAL files 2016-08-10 15:47:10,793 WARN [main] mapreduce.WALInputFormat(286): File hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.backup.manifest does not appear to be a WAL file. Skipping... 2016-08-10 15:47:10,799 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741953_1129{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:10,806 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741954_1130{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:10,819 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741955_1131{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:10,845 WARN [ResourceManager Event Processor] capacity.LeafQueue(610): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low.
skipping enforcement to allow at least one application to start 2016-08-10 15:47:10,846 WARN [ResourceManager Event Processor] capacity.LeafQueue(631): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-10 15:47:10,990 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info 2016-08-10 15:47:10,990 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/meta 2016-08-10 15:47:10,990 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/info 2016-08-10 15:47:10,990 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table 2016-08-10 15:47:10,990 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/session 2016-08-10 15:47:11,517 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0002_000001 (auth:SIMPLE) 2016-08-10 15:47:11,690 DEBUG [10.22.16.34,56266,1470869110579_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-10 15:47:15,689 INFO [Socket Reader #1 for port 56308] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0002_000001 (auth:SIMPLE) 2016-08-10 15:47:15,935 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741956_1132{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:18,912 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0002_000001 (auth:SIMPLE) 2016-08-10 15:47:21,349 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0002_000001 (auth:SIMPLE) 2016-08-10 15:47:21,362 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(223): Exit code from container container_1470869125521_0002_01_000002 is : 143 2016-08-10 15:47:21,394 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741957_1133{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 13201 2016-08-10 15:47:21,403 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741958_1134{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:21,422 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741959_1135{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:21,439 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741960_1136{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:22,464 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741953_1129 127.0.0.1:56219 2016-08-10 15:47:22,464 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741954_1130 127.0.0.1:56219 2016-08-10 15:47:22,465 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741955_1131 127.0.0.1:56219 2016-08-10 15:47:22,465 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741957_1133 127.0.0.1:56219 2016-08-10 15:47:22,465 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741956_1132 127.0.0.1:56219 2016-08-10 15:47:22,465 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741952_1128 127.0.0.1:56219 2016-08-10 15:47:22,465 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741943_1119 127.0.0.1:56219 2016-08-10 15:47:22,465 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741941_1117 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741946_1122 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741945_1121 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741939_1115 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741949_1125 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741940_1116 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741951_1127 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741947_1123 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741944_1120 127.0.0.1:56219 2016-08-10 15:47:22,466 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741948_1124 127.0.0.1:56219 2016-08-10 15:47:22,467 INFO 
[IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741942_1118 127.0.0.1:56219 2016-08-10 15:47:22,467 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741950_1126 127.0.0.1:56219 2016-08-10 15:47:23,031 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1470869225056 2016-08-10 15:47:23,032 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1a33f8a5 connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:23,034 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1a33f8a50x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:23,035 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46b64242, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:23,035 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:23,035 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:23,036 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1a33f8a5-0x15676a151160028 connected 2016-08-10 15:47:23,037 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:23,037 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56624; # active connections: 11 2016-08-10 15:47:23,038 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:23,038 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56624 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:23,050 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-10 15:47:23,053 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:56218/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1470869225056/_SUCCESS 2016-08-10 15:47:23,054 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:47:23,054 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56626; # active connections: 12 2016-08-10 15:47:23,054 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:23,054 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56626 with version info: version: "2.0.0-SNAPSHOT" url: 
"git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:23,056 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1470869225056. Does it contain files in subdirectories that correspond to column family names? 2016-08-10 15:47:23,056 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-10 15:47:23,056 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160028 2016-08-10 15:47:23,056 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:23,057 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0 2016-08-10 15:47:23,057 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160025 2016-08-10 15:47:23,057 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56626 because read count=-1. Number of active connections: 12 2016-08-10 15:47:23,057 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56624 because read count=-1. Number of active connections: 12 2016-08-10 15:47:23,057 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (1579971248) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:23,057 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (-455917205) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:23,058 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:23,058 INFO [main] impl.RestoreClientImpl(292): ns2:test-14708691290511 has been successfully restored to ns2:table2_restore 2016-08-10 15:47:23,058 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1534245061) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:23,058 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56570 because read count=-1. Number of active connections: 10 2016-08-10 15:47:23,058 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-10 15:47:23,058 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns2/test-14708691290511/ 2016-08-10 15:47:23,058 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869176664 hdfs://localhost:56218/backupUT/backup_1470869176664/ns2/test-14708691290511/ 2016-08-10 15:47:23,058 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. 
to be implemented in future jira 2016-08-10 15:47:23,059 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:47:23,062 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1470869137937 2016-08-10 15:47:23,062 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1470869137937/ns3/test-14708691290512/.backup.manifest 2016-08-10 15:47:23,062 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14708691290512' to 'ns3:table3_restore' from full backup image hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512 2016-08-10 15:47:23,067 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/archive/data/ns3/test-14708691290512 does not exist 2016-08-10 15:47:23,067 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns3:test-14708691290512, will only create table 2016-08-10 15:47:23,068 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2e0f406a connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:23,070 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2e0f406a0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:23,070 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c8d3adb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:23,071 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:23,071 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:23,071 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x2e0f406a-0x15676a151160029 connected 2016-08-10 15:47:23,072 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:23,072 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56630; # active connections: 10 2016-08-10 15:47:23,073 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:23,073 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56630 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:23,073 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns3:table3_restore', preserving region splits 2016-08-10 15:47:23,074 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:47:23,074 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56631; # active connections: 11 2016-08-10
15:47:23,075 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:23,075 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56631 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:23,075 INFO [main] client.HBaseAdmin$10(780): Started disable of ns3:table3_restore 2016-08-10 15:47:23,075 INFO [B.defaultRpcServer.handler=2,queue=0,port=56226] master.HMaster(1986): Client=tyu//10.22.16.34 disable ns3:table3_restore 2016-08-10 15:47:23,180 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-10 15:47:23,183 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-10 15:47:23,184 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:562260000000001 2016-08-10 15:47:23,286 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-10 15:47:23,394 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869243394,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-10 15:47:23,395 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:23,397 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLING in META 2016-08-10 15:47:23,488 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-10 15:47:23,503 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-10 15:47:23,504 DEBUG [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. (offlining), current state: {eca8595ba8e4dbe092e67a04f23a6fe3 state=OPEN, ts=1470869198634, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:23,504 INFO [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {eca8595ba8e4dbe092e67a04f23a6fe3 state=OPEN, ts=1470869198634, server=10.22.16.34,56228,1470869104167} to {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_CLOSE, ts=1470869243504, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:23,505 INFO [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 
with state=PENDING_CLOSE 2016-08-10 15:47:23,505 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:23,506 INFO [PriorityRpcServer.handler=1,queue=1,port=56228] regionserver.RSRpcServices(1314): Close eca8595ba8e4dbe092e67a04f23a6fe3, moving to null 2016-08-10 15:47:23,507 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:23,508 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1419): Closing ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.: disabling compactions & flushes 2016-08-10 15:47:23,508 DEBUG [10.22.16.34,56226,1470869103454-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.16.34,56228,1470869104167 for region ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:23,508 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:23,508 INFO [StoreCloserThread-ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.-1] regionserver.HStore(839): Closed f 2016-08-10 15:47:23,509 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825 2016-08-10 15:47:23,518 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2 2016-08-10 15:47:23,519 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1552): Closed ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:23,520 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.AssignmentManager(2884): Got transition CLOSED for {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_CLOSE, ts=1470869243504, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167 2016-08-10 15:47:23,520 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(1106): Transition {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_CLOSE, ts=1470869243504, server=10.22.16.34,56228,1470869104167} to {eca8595ba8e4dbe092e67a04f23a6fe3 state=OFFLINE, ts=1470869243520, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:23,521 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 
with state=OFFLINE 2016-08-10 15:47:23,521 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:23,522 INFO [B.defaultRpcServer.handler=4,queue=0,port=56226] master.RegionStates(590): Offlined eca8595ba8e4dbe092e67a04f23a6fe3 from 10.22.16.34,56228,1470869104167 2016-08-10 15:47:23,522 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:23,660 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869243660,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-10 15:47:23,661 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:23,662 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLED in META 2016-08-10 15:47:23,662 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(424): Disabled table, ns3:table3_restore, is completed. 2016-08-10 15:47:23,793 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-10 15:47:23,878 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:562260000000001 2016-08-10 15:47:23,878 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 693msec: DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=FINISHED 2016-08-10 15:47:24,298 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-10 15:47:24,299 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns3:table3_restore completed 2016-08-10 15:47:24,300 INFO [main] client.HBaseAdmin$8(615): Started truncating ns3:table3_restore 2016-08-10 15:47:24,301 INFO [B.defaultRpcServer.handler=3,queue=0,port=56226] master.HMaster(1848): Client=tyu//10.22.16.34 truncate ns3:table3_restore 2016-08-10 15:47:24,407 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56226] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 
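Procedures 23 (DisableTableProcedure) and 24 (TruncateTableProcedure, preserveSplits=true) above are driven from the client by two Admin calls, which is how RestoreServerUtil's "Truncating existing target table ... preserving region splits" step works. A minimal sketch using the standard Admin API (class name illustrative; the restore code does the equivalent internally):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncatePreservingSplitsSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("ns3:table3_restore");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // DisableTableProcedure (procId=23 above): regions are offlined and
      // the table state flips DISABLING -> DISABLED in hbase:meta.
      if (admin.isTableEnabled(tn)) {
        admin.disableTable(tn);
      }
      // TruncateTableProcedure (procId=24 above) with preserveSplits=true:
      // drops and recreates the table while keeping region boundaries.
      admin.truncateTable(tn, true);
    }
  }
}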
2016-08-10 15:47:24,410 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:562260000000002 2016-08-10 15:47:24,411 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(87): waiting for 'ns3:table3_restore' regions in transition 2016-08-10 15:47:24,439 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741952_1128, blk_1073741953_1129, blk_1073741954_1130, blk_1073741955_1131, blk_1073741956_1132, blk_1073741957_1133, blk_1073741939_1115, blk_1073741940_1116, blk_1073741941_1117, blk_1073741942_1118, blk_1073741943_1119, blk_1073741944_1120, blk_1073741945_1121, blk_1073741946_1122, blk_1073741947_1123, blk_1073741948_1124, blk_1073741949_1125, blk_1073741950_1126, blk_1073741951_1127] 2016-08-10 15:47:24,515 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1470869244515,"tag":[],"qualifier":"","vlen":0}]},"row":"ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3."} 2016-08-10 15:47:24,516 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:24,518 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => eca8595ba8e4dbe092e67a04f23a6fe3, NAME => 'ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.', STARTKEY => '', ENDKEY => ''}] 2016-08-10 15:47:24,520 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(408): Removing 'ns3:table3_restore' from region states. 2016-08-10 15:47:24,523 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(412): Marking 'ns3:table3_restore' as deleted. 2016-08-10 15:47:24,523 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1470869244523,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns3:table3_restore"} 2016-08-10 15:47:24,524 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:24,524 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1726): Deleted table ns3:table3_restore state from META 2016-08-10 15:47:24,633 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(340): Archiving region ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 
from FS 2016-08-10 15:47:24,633 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3 2016-08-10 15:47:24,636 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits] 2016-08-10 15:47:24,643 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits/4.seqid, to hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/archive/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits/4.seqid 2016-08-10 15:47:24,644 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741906_1082 127.0.0.1:56219 2016-08-10 15:47:24,644 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3 2016-08-10 15:47:24,645 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(344): Table 'ns3:table3_restore' archived! 
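The HFileArchiver records above show that "deleting" a region during truncate actually moves its store files and recovered.edits under the cluster's archive directory; both the source (.tmp/data/ns3/table3_restore/...) and the destination (archive/data/ns3/table3_restore/...) are visible in the log. A small, hypothetical inspection utility for that layout, using only the generic FileSystem API (the utility and its argument are illustrative, not part of HBase):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedRegionFiles {
  public static void main(String[] args) throws Exception {
    // args[0]: the archive directory for a table, e.g. the one logged above:
    // hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/archive/data/ns3/table3_restore
    Path archiveDir = new Path(args[0]);
    FileSystem fs = archiveDir.getFileSystem(new Configuration());
    // One subdirectory per archived region (encoded region name), holding the
    // store files and recovered.edits seqid files the archiver kept.
    for (FileStatus regionDir : fs.listStatus(archiveDir)) {
      System.out.println(regionDir.getPath());
    }
  }
}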
2016-08-10 15:47:24,646 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741905_1081 127.0.0.1:56219 2016-08-10 15:47:24,765 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741961_1137{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:24,767 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001 2016-08-10 15:47:24,768 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/.tmp Table name == ns3:table3_restore 2016-08-10 15:47:24,775 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741962_1138{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0 2016-08-10 15:47:24,776 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:24,777 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.: disabling compactions & flushes 2016-08-10 15:47:24,777 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:24,777 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 
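The HTD echoed above is a single family 'f' with VERSIONS => '1' and every other attribute at its default. Rebuilding it with the descriptor API of this era looks like the sketch below; this is illustrative, not the restore code itself, which reads the descriptor back from the backup's .tableinfo file:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class Table3RestoreDescriptorSketch {
  // Builds a descriptor matching the HTD printed in the log above.
  public static HTableDescriptor build() {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns3:table3_restore"));
    htd.addFamily(new HColumnDescriptor("f").setMaxVersions(1)); // VERSIONS => '1'
    return htd;
  }
}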
2016-08-10 15:47:24,883 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3."} 2016-08-10 15:47:24,884 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:24,884 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1 2016-08-10 15:47:24,988 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.16.34,56228,1470869104167 2016-08-10 15:47:24,988 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-10 15:47:24,989 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {eca8595ba8e4dbe092e67a04f23a6fe3 state=OFFLINE, ts=1470869244988, server=null} to {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_OPEN, ts=1470869244989, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:24,989 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
with state=PENDING_OPEN, sn=10.22.16.34,56228,1470869104167 2016-08-10 15:47:24,989 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:24,990 INFO [PriorityRpcServer.handler=3,queue=1,port=56228] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:24,994 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(6339): Opening region: {ENCODED => eca8595ba8e4dbe092e67a04f23a6fe3, NAME => 'ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.', STARTKEY => '', ENDKEY => ''} 2016-08-10 15:47:24,995 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore eca8595ba8e4dbe092e67a04f23a6fe3 2016-08-10 15:47:24,995 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:24,997 INFO [StoreOpener-eca8595ba8e4dbe092e67a04f23a6fe3-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-10 15:47:24,998 INFO [StoreOpener-eca8595ba8e4dbe092e67a04f23a6fe3-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-10 15:47:24,998 DEBUG [StoreOpener-eca8595ba8e4dbe092e67a04f23a6fe3-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/f 2016-08-10 15:47:24,999 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3 2016-08-10 15:47:25,003 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-10 15:47:25,003 INFO [RS_OPEN_REGION-10.22.16.34:56228-2] regionserver.HRegion(871): Onlined eca8595ba8e4dbe092e67a04f23a6fe3; next sequenceid=2 2016-08-10 15:47:25,003 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825 2016-08-10 15:47:25,004 INFO [PostOpenDeployTasks:eca8595ba8e4dbe092e67a04f23a6fe3] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:25,005 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.AssignmentManager(2884): Got transition OPENED for {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_OPEN, ts=1470869244989, server=10.22.16.34,56228,1470869104167} from 10.22.16.34,56228,1470869104167 2016-08-10 15:47:25,005 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStates(1106): Transition {eca8595ba8e4dbe092e67a04f23a6fe3 state=PENDING_OPEN, ts=1470869244989, server=10.22.16.34,56228,1470869104167} to {eca8595ba8e4dbe092e67a04f23a6fe3 state=OPEN, ts=1470869245005, server=10.22.16.34,56228,1470869104167} 2016-08-10 15:47:25,005 INFO [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. with state=OPEN, openSeqNum=2, server=10.22.16.34,56228,1470869104167 2016-08-10 15:47:25,005 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:25,006 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226] master.RegionStates(452): Onlined eca8595ba8e4dbe092e67a04f23a6fe3 on 10.22.16.34,56228,1470869104167 2016-08-10 15:47:25,006 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.16.34,56228,1470869104167 2016-08-10 15:47:25,006 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1470869245006,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-10 15:47:25,006 ERROR [B.defaultRpcServer.handler=0,queue=0,port=56226] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-10 15:47:25,007 DEBUG [PostOpenDeployTasks:eca8595ba8e4dbe092e67a04f23a6fe3] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3. 2016-08-10 15:47:25,007 DEBUG [RS_OPEN_REGION-10.22.16.34:56228-2] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
on 10.22.16.34,56228,1470869104167 2016-08-10 15:47:25,007 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429 2016-08-10 15:47:25,008 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META 2016-08-10 15:47:25,116 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(129): truncate 'ns3:table3_restore' completed 2016-08-10 15:47:25,221 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:562260000000002 2016-08-10 15:47:25,222 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 815msec: TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=FINISHED 2016-08-10 15:47:25,415 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=56226] master.MasterRpcServices(974): Checking to see if procedure is done procId=24 2016-08-10 15:47:25,416 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns3:table3_restore completed 2016-08-10 15:47:25,416 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-10 15:47:25,416 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160029 2016-08-10 15:47:25,417 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:25,418 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns3:test-14708691290512' to 'ns3:table3_restore' from log dirs: hdfs://localhost:56218/backupUT/backup_1470869176664/WALs 2016-08-10 15:47:25,418 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-377610439) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:25,419 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56631 because read count=-1. Number of active connections: 11 2016-08-10 15:47:25,418 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56630 because read count=-1. 
Number of active connections: 11 2016-08-10 15:47:25,418 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-361182696) to /10.22.16.34:56226 from tyu: closed 2016-08-10 15:47:25,419 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2baaadec connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:25,421 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2baaadec0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:25,422 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@150ea1d4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:25,422 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:25,422 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-10 15:47:25,423 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x2baaadec-0x15676a15116002a connected 2016-08-10 15:47:25,424 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:25,424 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56648; # active connections: 10 2016-08-10 15:47:25,425 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:25,425 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56648 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:25,426 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:56218/backupUT/backup_1470869176664/WALs from hbase tables ,ns3:test-14708691290512 to tables ,ns3:table3_restore 2016-08-10 15:47:25,426 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns3:test-14708691290512 into ns3:table3_restore 2016-08-10 15:47:25,427 DEBUG [main] mapreduce.WALPlayer(299): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1470869245426 2016-08-10 15:47:25,428 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x16966ef connecting to ZooKeeper ensemble=localhost:50432 2016-08-10 15:47:25,430 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x16966ef0x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-10 15:47:25,430 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1dbd7ead, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-10 15:47:25,431 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-10 15:47:25,431 DEBUG [main] ipc.AsyncRpcClient(171): Use 
global event loop group NioEventLoopGroup 2016-08-10 15:47:25,431 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x16966ef-0x15676a15116002b connected 2016-08-10 15:47:25,432 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-10 15:47:25,432 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56650; # active connections: 11 2016-08-10 15:47:25,433 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:25,433 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56650 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:25,434 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-10 15:47:25,434 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns3:test-14708691290512 2016-08-10 15:47:25,437 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-10 15:47:25,437 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56651; # active connections: 12 2016-08-10 15:47:25,437 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-10 15:47:25,437 INFO [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56651 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0 2016-08-10 15:47:25,440 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-10 15:47:25,441 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_e71f51f0-91b7-46c8-8f75-78498c6c2eb0 2016-08-10 15:47:25,446 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741963_1139{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0 2016-08-10 15:47:25,448 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
2016-08-10 15:47:25,657 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4672414619223146246.jar
2016-08-10 15:47:26,853 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-7307138568777376987.jar
2016-08-10 15:47:27,240 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-3770335927226676483.jar
2016-08-10 15:47:27,260 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-8804750189579871273.jar
2016-08-10 15:47:27,441 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741905_1081, blk_1073741906_1082]
2016-08-10 15:47:28,375 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0002_000001 (auth:SIMPLE)
2016-08-10 15:47:28,504 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-7417873577143579563.jar
2016-08-10 15:47:28,505 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-10 15:47:28,505 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-10 15:47:28,505 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-10 15:47:28,506 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:47:28,506 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-10 15:47:28,506 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-10 15:47:28,720 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4866863944462485042.jar
2016-08-10 15:47:28,720 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4866863944462485042.jar
2016-08-10 15:47:29,028 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-10 15:47:29,938 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-5096105159941407715.jar
2016-08-10 15:47:29,939 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4866863944462485042.jar
2016-08-10 15:47:29,940 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-4866863944462485042.jar
2016-08-10 15:47:29,940 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/hadoop-5096105159941407715.jar
2016-08-10 15:47:29,940 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.1/hadoop-mapreduce-client-core-2.7.1.jar
2016-08-10 15:47:29,940 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns3:test-14708691290512 output configured.
2016-08-10 15:47:29,941 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:47:29,941 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116002b
2016-08-10 15:47:29,941 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:29,942 DEBUG [main] mapreduce.WALPlayer(316): success configuring load incremental job
2016-08-10 15:47:29,942 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56651 because read count=-1. Number of active connections: 12
2016-08-10 15:47:29,942 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-243923113) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:29,942 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (2141573626) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:29,942 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56650 because read count=-1. Number of active connections: 12
2016-08-10 15:47:29,942 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-10 15:47:29,963 WARN [main] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-10 15:47:29,976 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741964_1140{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:29,994 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741965_1141{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,002 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741966_1142{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,021 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741967_1143{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,028 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741968_1144{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,034 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741969_1145{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,045 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741970_1146{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,055 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741971_1147{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,073 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741972_1148{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,080 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741973_1149{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,087 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741974_1150{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,095 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741975_1151{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,106 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741976_1152{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,117 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741977_1153{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,119 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-10 15:47:30,132 DEBUG [main] mapreduce.WALInputFormat(263): Scanning hdfs://localhost:56218/backupUT/backup_1470869176664/WALs for WAL files
2016-08-10 15:47:30,133 WARN [main] mapreduce.WALInputFormat(286): File hdfs://localhost:56218/backupUT/backup_1470869176664/WALs/.backup.manifest does not appear to be an WAL file. Skipping...
2016-08-10 15:47:30,138 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741978_1154{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:30,143 INFO [IPC Server handler 9 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741979_1155{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:30,158 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741980_1156{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 131846
2016-08-10 15:47:30,587 WARN [ResourceManager Event Processor] capacity.LeafQueue(610): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-10 15:47:30,587 WARN [ResourceManager Event Processor] capacity.LeafQueue(631): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-10 15:47:30,741 INFO [Socket Reader #1 for port 56312] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0003_000001 (auth:SIMPLE)
2016-08-10 15:47:35,017 INFO [Socket Reader #1 for port 56308] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0003_000001 (auth:SIMPLE)
2016-08-10 15:47:35,265 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741981_1157{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:38,259 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0003_000001 (auth:SIMPLE)
2016-08-10 15:47:41,650 INFO [Socket Reader #1 for port 56316] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0003_000001 (auth:SIMPLE)
2016-08-10 15:47:41,664 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(223): Exit code from container container_1470869125521_0003_01_000002 is : 143
2016-08-10 15:47:41,695 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741982_1158{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 13201
2016-08-10 15:47:41,703 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741983_1159{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:41,723 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741984_1160{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:41,740 INFO [IPC Server handler 6 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741985_1161{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:42,768 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741978_1154 127.0.0.1:56219
2016-08-10 15:47:42,768 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741979_1155 127.0.0.1:56219
2016-08-10 15:47:42,768 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741980_1156 127.0.0.1:56219
2016-08-10 15:47:42,768 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741982_1158 127.0.0.1:56219
2016-08-10 15:47:42,768 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741981_1157 127.0.0.1:56219
2016-08-10 15:47:42,768 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741977_1153 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741975_1151 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741973_1149 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741964_1140 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741965_1141 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741972_1148 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741967_1143 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741969_1145 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741976_1152 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741970_1146 127.0.0.1:56219
2016-08-10 15:47:42,769 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741968_1144 127.0.0.1:56219
2016-08-10 15:47:42,770 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741971_1147 127.0.0.1:56219
2016-08-10 15:47:42,770 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741966_1142 127.0.0.1:56219
2016-08-10 15:47:42,770 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741974_1150 127.0.0.1:56219
2016-08-10 15:47:43,763 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1470869245426
2016-08-10 15:47:43,763 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x22cb73c1 connecting to ZooKeeper ensemble=localhost:50432
2016-08-10 15:47:43,768 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x22cb73c10x0, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-10 15:47:43,769 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@242a9082, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-10 15:47:43,770 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-10 15:47:43,770 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-10 15:47:43,770 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x22cb73c1-0x15676a15116002c connected
2016-08-10 15:47:43,771 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-10 15:47:43,772 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56722; # active connections: 11
2016-08-10 15:47:43,772 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:43,772 INFO [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56722 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:43,778 DEBUG [main] client.ConnectionImplementation(604): Table ns3:table3_restore should be available
2016-08-10 15:47:43,781 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:56218/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1470869245426/_SUCCESS
2016-08-10 15:47:43,782 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-10 15:47:43,782 DEBUG [RpcServer.listener,port=56226] ipc.RpcServer$Listener(880): RpcServer.listener,port=56226: connection from 10.22.16.34:56724; # active connections: 12
2016-08-10 15:47:43,783 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-10 15:47:43,783 INFO [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Connection(1740): Connection from 10.22.16.34 port: 56724 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Wed Aug 10 15:44:45 PDT 2016" src_checksum: "71cda35d1ca67a82638a648edff43999" version_major: 2 version_minor: 0
2016-08-10 15:47:43,784 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1470869245426. Does it contain files in subdirectories that correspond to column family names?
2016-08-10 15:47:43,784 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:47:43,785 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116002c
2016-08-10 15:47:43,785 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:43,786 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0
2016-08-10 15:47:43,786 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116002a
2016-08-10 15:47:43,786 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (1868172786) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:43,786 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56724 because read count=-1. Number of active connections: 12
2016-08-10 15:47:43,786 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56722 because read count=-1. Number of active connections: 12
2016-08-10 15:47:43,786 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-1157925822) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:43,787 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:43,787 INFO [main] impl.RestoreClientImpl(292): ns3:test-14708691290512 has been successfully restored to ns3:table3_restore
2016-08-10 15:47:43,788 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-10 15:47:43,788 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869137937 hdfs://localhost:56218/backupUT/backup_1470869137937/ns3/test-14708691290512/
2016-08-10 15:47:43,788 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1470869176664 hdfs://localhost:56218/backupUT/backup_1470869176664/ns3/test-14708691290512/
2016-08-10 15:47:43,788 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished
2016-08-10 15:47:43,788 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1470869129051, ns2:test-14708691290511, ns3:test-14708691290512] are successful!
2016-08-10 15:47:43,788 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (2085134431) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:43,788 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56648 because read count=-1. Number of active connections: 10
2016-08-10 15:47:43,868 INFO [main] hbase.ResourceChecker(172): after: backup.TestIncrementalBackup#TestIncBackupRestore Thread=862 (was 790)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56262-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=4,queue=0,port=56226-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Potentially hanging thread: ApplicationMasterLauncher #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56266-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t16
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-58060915-10.22.16.34-1470869099552:blk_1073741881_1057, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1184)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1255)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_CLOSE_REGION-10.22.16.34:56228-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: IPC Client (944601779) connection to localhost/127.0.0.1:56251 from tyu.hfs.1
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:928)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:973)
Potentially hanging thread: ResponseProcessor for block BP-58060915-10.22.16.34-1470869099552:blk_1073741882_1058
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2278)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)
Potentially hanging thread: rs(10.22.16.34,56226,1470869103454)-backup-pool29-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_CLOSE_REGION-10.22.16.34:56228-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t14
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t12
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.16.34,56226,1470869103454)-backup-pool20-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_990569025_1 at /127.0.0.1:56409 [Receiving block BP-58060915-10.22.16.34-1470869099552:blk_1073741882_1058]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t10
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_CLOSE_REGION-10.22.16.34:56228-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.16.34,56226,1470869103454' subprocedure-pool3-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: region-location-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.16.34,56228,1470869104167' subprocedure-pool2-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-60284837_1 at /127.0.0.1:56725 [Waiting for operation #2]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.16.34,56228,1470869104167)-backup-pool30-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: LogDeleter #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1445567493_1 at /127.0.0.1:56408 [Receiving block BP-58060915-10.22.16.34-1470869099552:blk_1073741881_1057]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56226-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-8
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: (10.22.16.34,56226,1470869103454)-proc-coordinator-pool7-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.16.34,56228,1470869104167' subprocedure-pool4-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-58060915-10.22.16.34-1470869099552:blk_1073741881_1057
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2278)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56262-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: IPC Client (944601779) connection to 10.22.16.34/10.22.16.34:56317 from tyu
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:928)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:973)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-0.1470869176824 block BP-58060915-10.22.16.34-1470869099552:blk_1073741881_1057
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418)
Potentially hanging thread: ApplicationMasterLauncher #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56262-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ApplicationMasterLauncher #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0xb319bc2-shared-pool33-t217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AsyncRpcChannel-pool2-t13 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) 
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: B.defaultRpcServer.handler=0,queue=0,port=56226-SendThread(localhost:50432) sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) Potentially hanging thread: PacketResponder: BP-58060915-10.22.16.34-1470869099552:blk_1073741882_1058, type=LAST_IN_PIPELINE, downstreams=0:[] java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:503) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1184) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1255) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: MASTER_TABLE_OPERATIONS-10.22.16.34:56226-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: IPC Client (944601779) connection to /10.22.16.34:56309 from tyu java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:928) org.apache.hadoop.ipc.Client$Connection.run(Client.java:973) Potentially hanging thread: IPC Client (944601779) connection to /10.22.16.34:56695 from tyu java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:928) org.apache.hadoop.ipc.Client$Connection.run(Client.java:973) Potentially hanging thread: DeletionService #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56226-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AsyncRpcChannel-pool2-t11 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: (10.22.16.34,56226,1470869103454)-proc-coordinator-pool8-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: LogDeleter #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: B.defaultRpcServer.handler=0,queue=0,port=56226-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AsyncRpcChannel-pool2-t15 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataStreamer for file /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825 block BP-58060915-10.22.16.34-1470869099552:blk_1073741882_1058 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418) Potentially hanging thread: Timer for 'JobHistoryServer' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0xb319bc2-shared-pool33-t218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86328053_1 at /127.0.0.1:56717 [Waiting for operation #3] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56266-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56228-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: communication thread java.lang.Object.wait(Native Method) org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:750) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ContainersLauncher #0 java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:272) java.io.BufferedInputStream.read1(BufferedInputStream.java:273) java.io.BufferedInputStream.read(BufferedInputStream.java:334) sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) java.io.InputStreamReader.read(InputStreamReader.java:184) java.io.BufferedReader.fill(BufferedReader.java:154) java.io.BufferedReader.read1(BufferedReader.java:205) java.io.BufferedReader.read(BufferedReader.java:279) org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:735) org.apache.hadoop.util.Shell.runCommand(Shell.java:531) 
org.apache.hadoop.util.Shell.run(Shell.java:456) org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722) org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211) org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) java.util.concurrent.FutureTask.run(FutureTask.java:262) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Thread-3495 java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:272) java.io.BufferedInputStream.read1(BufferedInputStream.java:273) java.io.BufferedInputStream.read(BufferedInputStream.java:334) sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) java.io.InputStreamReader.read(InputStreamReader.java:184) java.io.BufferedReader.fill(BufferedReader.java:154) java.io.BufferedReader.readLine(BufferedReader.java:317) java.io.BufferedReader.readLine(BufferedReader.java:382) org.apache.hadoop.util.Shell$1.run(Shell.java:510) Potentially hanging thread: rs(10.22.16.34,56228,1470869104167)-backup-pool19-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.16.34:56226-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: B.defaultRpcServer.handler=4,queue=0,port=56226-SendThread(localhost:50432) sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) Potentially hanging thread: member: '10.22.16.34,56226,1470869103454' subprocedure-pool1-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) - Thread LEAK? -, OpenFileDescriptor=1159 (was 1032) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=10240 (was 10240), SystemLoadAverage=238 (was 207) - SystemLoadAverage LEAK? -, ProcessCount=274 (was 267) - ProcessCount LEAK? 
-, AvailableMemoryMB=246 (was 431) 2016-08-10 15:47:43,869 WARN [main] hbase.ResourceChecker(135): Thread=862 is superior to 500 2016-08-10 15:47:43,869 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=1159 is superior to 1024 2016-08-10 15:47:43,897 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741901_1077 127.0.0.1:56219 2016-08-10 15:47:43,898 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(1106): BLOCK* addToInvalidates: blk_1073741904_1080 127.0.0.1:56219 2016-08-10 15:47:43,898 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster 2016-08-10 15:47:43,899 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a15116000b 2016-08-10 15:47:43,899 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:43,899 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster 2016-08-10 15:47:43,900 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-25681404) to /10.22.16.34:56262 from tyu: closed 2016-08-10 15:47:43,900 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController 2016-08-10 15:47:43,900 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56262] ipc.RpcServer$Listener(912): RpcServer.listener,port=56262: DISCONNECTING client 10.22.16.34:56283 because read count=-1. Number of active connections: 2 2016-08-10 15:47:43,900 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested 2016-08-10 15:47:43,901 INFO [M:0;10.22.16.34:56262] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-10 15:47:43,901 INFO [SplitLogWorker-10.22.16.34:56262] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-10 15:47:43,901 INFO [SplitLogWorker-10.22.16.34:56262] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.16.34,56262,1470869110526 exiting 2016-08-10 15:47:43,901 INFO [M:0;10.22.16.34:56262] regionserver.HeapMemoryManager(202): Stopping HeapMemoryTuner chore.
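
Most entries in the long "Potentially hanging thread" dump above are not genuine hangs: a worker parked in sun.misc.Unsafe.park via LinkedBlockingQueue.take (or SynchronousQueue.take) inside ThreadPoolExecutor.getTask is simply an idle pool thread waiting for its next task. The dump still matters because threads that should have died with the test (the leftover DataXceiver, ApplicationMasterLauncher and RS_COMPACTED_FILES_DISCHARGER workers above) inflate the counts the ResourceChecker then warns about. A minimal JDK-only sketch that reproduces the same stack shape; the class name IdlePoolDump is made up for illustration:

    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch: an idle worker in a fixed pool parks inside
    // ThreadPoolExecutor.getTask(), yielding the same frames as the
    // "Potentially hanging thread" entries in the dump above.
    public class IdlePoolDump {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(1);
            pool.submit(() -> { });            // one trivial task, then the worker idles
            TimeUnit.MILLISECONDS.sleep(200);  // give it time to block in getTask()

            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                for (StackTraceElement frame : e.getValue()) {
                    if (frame.getClassName().endsWith("ThreadPoolExecutor")
                            && frame.getMethodName().equals("getTask")) {
                        // Parked in the work queue's take(): idle, not hung.
                        System.out.println("Idle worker: " + e.getKey().getName());
                    }
                }
            }
            pool.shutdownNow();
        }
    }
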
2016-08-10 15:47:43,902 INFO [M:0;10.22.16.34:56262] procedure2.ProcedureExecutor(532): Stopping the procedure executor 2016-08-10 15:47:43,902 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running 2016-08-10 15:47:43,902 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-10 15:47:43,902 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running 2016-08-10 15:47:43,902 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-10 15:47:43,902 INFO [M:0;10.22.16.34:56262] wal.WALProcedureStore(232): Stopping the WAL Procedure Store 2016-08-10 15:47:43,902 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running 2016-08-10 15:47:43,902 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested 2016-08-10 15:47:43,902 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running 2016-08-10 15:47:43,903 INFO [RS:0;10.22.16.34:56266] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-10 15:47:43,903 INFO [SplitLogWorker-10.22.16.34:56266] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-10 15:47:43,903 INFO [SplitLogWorker-10.22.16.34:56266] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.16.34,56266,1470869110579 exiting 2016-08-10 15:47:43,903 INFO [RS:0;10.22.16.34:56266] regionserver.HeapMemoryManager(202): Stopping HeapMemoryTuner chore. 2016-08-10 15:47:43,903 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-10 15:47:43,903 INFO [RS:0;10.22.16.34:56266] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully. 2016-08-10 15:47:43,903 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-10 15:47:43,903 INFO [RS:0;10.22.16.34:56266] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-08-10 15:47:43,903 INFO [RS:0;10.22.16.34:56266] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-08-10 15:47:43,904 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1063): stopping server 10.22.16.34,56266,1470869110579 2016-08-10 15:47:43,904 DEBUG [RS:0;10.22.16.34:56266] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator 2016-08-10 15:47:43,904 DEBUG [RS_CLOSE_REGION-10.22.16.34:56266-0] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.
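
The "Thread=862 is superior to 500" and "OpenFileDescriptor=1159 is superior to 1024" warnings at 15:47:43,869 come from a before/after comparison around the test run. A hedged sketch of that kind of check using only JDK management beans; SimpleResourceChecker and its wiring are hypothetical, only the 500/1024 thresholds and the message wording are taken from the log:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class SimpleResourceChecker {
        private static final int THREAD_LIMIT = 500;   // threshold seen in the log
        private static final int FD_LIMIT = 1024;      // threshold seen in the log

        private int threadsBefore;

        public void start() {
            threadsBefore = ManagementFactory.getThreadMXBean().getThreadCount();
        }

        public void end() {
            int threadsAfter = ManagementFactory.getThreadMXBean().getThreadCount();
            if (threadsAfter > THREAD_LIMIT) {
                System.err.printf("Thread=%d is superior to %d%n", threadsAfter, THREAD_LIMIT);
            }
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                long fds = ((com.sun.management.UnixOperatingSystemMXBean) os)
                        .getOpenFileDescriptorCount();
                if (fds > FD_LIMIT) {
                    System.err.printf("OpenFileDescriptor=%d is superior to %d%n", fds, FD_LIMIT);
                }
            }
            if (threadsAfter > threadsBefore) {
                System.err.printf("- Thread LEAK? - (was %d, now %d)%n", threadsBefore, threadsAfter);
            }
        }

        public static void main(String[] args) {
            SimpleResourceChecker rc = new SimpleResourceChecker();
            rc.start();
            // ... test body would run here ...
            rc.end();
        }
    }
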
2016-08-10 15:47:43,904 INFO [RS:0;10.22.16.34:56266] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160009 2016-08-10 15:47:43,904 DEBUG [RS_CLOSE_REGION-10.22.16.34:56266-0] regionserver.HRegion(1419): Closing hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.: disabling compactions & flushes 2016-08-10 15:47:43,904 DEBUG [RS_CLOSE_REGION-10.22.16.34:56266-0] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:47:43,905 INFO [StoreCloserThread-hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.-1] regionserver.HStore(839): Closed meta 2016-08-10 15:47:43,905 INFO [StoreCloserThread-hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.-1] regionserver.HStore(839): Closed session 2016-08-10 15:47:43,905 DEBUG [RS:0;10.22.16.34:56266] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:43,905 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1292): Waiting on 1 regions to close 2016-08-10 15:47:43,905 DEBUG [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1296): {5a493dba506f3912b964610f82e9b52e=hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e.} 2016-08-10 15:47:43,905 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579/10.22.16.34%2C56266%2C1470869110579.regiongroup-1.1470869113877 2016-08-10 15:47:43,910 DEBUG [RS_CLOSE_REGION-10.22.16.34:56266-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/backup/5a493dba506f3912b964610f82e9b52e/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2 2016-08-10 15:47:43,911 INFO [RS_CLOSE_REGION-10.22.16.34:56266-0] regionserver.HRegion(1552): Closed hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:47:43,911 DEBUG [RS_CLOSE_REGION-10.22.16.34:56266-0] handler.CloseRegionHandler(122): Closed hbase:backup,,1470869113004.5a493dba506f3912b964610f82e9b52e. 2016-08-10 15:47:43,914 INFO [IPC Server handler 9 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|RBW]]} size 465 2016-08-10 15:47:43,914 INFO [M:0;10.22.16.34:56262] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully. 2016-08-10 15:47:43,914 INFO [M:0;10.22.16.34:56262] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-08-10 15:47:43,914 INFO [M:0;10.22.16.34:56262] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-08-10 15:47:43,915 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1063): stopping server 10.22.16.34,56262,1470869110526 2016-08-10 15:47:43,915 DEBUG [M:0;10.22.16.34:56262] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator 2016-08-10 15:47:43,915 DEBUG [RS_CLOSE_REGION-10.22.16.34:56262-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 
2016-08-10 15:47:43,915 INFO [M:0;10.22.16.34:56262] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160008 2016-08-10 15:47:43,915 DEBUG [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HRegion(1419): Closing hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f.: disabling compactions & flushes 2016-08-10 15:47:43,915 DEBUG [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:47:43,915 INFO [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=344 B 2016-08-10 15:47:43,916 DEBUG [M:0;10.22.16.34:56262] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-10 15:47:43,916 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:47:43,916 INFO [M:0;10.22.16.34:56262] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish... 2016-08-10 15:47:43,916 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56266] ipc.RpcServer$Listener(912): RpcServer.listener,port=56266: DISCONNECTING client 10.22.16.34:56288 because read count=-1. Number of active connections: 1 2016-08-10 15:47:43,916 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-1040434001) to /10.22.16.34:56266 from tyu: closed 2016-08-10 15:47:43,916 INFO [M:0;10.22.16.34:56262] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish... 2016-08-10 15:47:43,916 INFO [M:0;10.22.16.34:56262] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish... 2016-08-10 15:47:43,916 INFO [M:0;10.22.16.34:56262] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish... 
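
The repeated "Waiting for Split/Merge/Large Compaction/Small Compaction Thread to finish..." lines above show an orderly drain: each internal pool is shut down and awaited in turn before the server proceeds. A JDK-only sketch of that pattern; the pool names mirror the log, and nothing here is the actual CompactSplitThread implementation:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class DrainInOrder {
        static void drain(String name, ExecutorService pool) throws InterruptedException {
            System.out.println("Waiting for " + name + " Thread to finish...");
            pool.shutdown();  // accept no new tasks; queued work still completes
            while (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                System.out.println("Still waiting for " + name + " Thread...");
            }
        }

        public static void main(String[] args) throws InterruptedException {
            drain("Split", Executors.newSingleThreadExecutor());
            drain("Merge", Executors.newSingleThreadExecutor());
            drain("Large Compaction", Executors.newSingleThreadExecutor());
            drain("Small Compaction", Executors.newSingleThreadExecutor());
        }
    }
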
2016-08-10 15:47:43,917 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1292): Waiting on 2 regions to close 2016-08-10 15:47:43,917 DEBUG [M:0;10.22.16.34:56262] regionserver.HRegionServer(1296): {1588230740=hbase:meta,,1.1588230740, f9abaaef3dbd3930695d90325cf0be0f=hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f.} 2016-08-10 15:47:43,917 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740 2016-08-10 15:47:43,917 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes 2016-08-10 15:47:43,917 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740 2016-08-10 15:47:43,918 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=4.02 KB 2016-08-10 15:47:43,918 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742 2016-08-10 15:47:43,925 INFO [IPC Server handler 9 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|RBW]]} size 0 2016-08-10 15:47:43,925 INFO [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=6, memsize=344, hasBloomFilter=true, into tmp file hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/.tmp/936be7fe559643bcb99fb0c73f93bc23 2016-08-10 15:47:43,926 INFO [IPC Server handler 4 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|RBW]]} size 6350 2016-08-10 15:47:43,932 DEBUG [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/.tmp/936be7fe559643bcb99fb0c73f93bc23 as hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/info/936be7fe559643bcb99fb0c73f93bc23 2016-08-10 15:47:43,937 INFO [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HStore(934): Added hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/info/936be7fe559643bcb99fb0c73f93bc23, entries=2, sequenceid=6, filesize=4.8 K 2016-08-10 15:47:43,938 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:47:43,938 INFO [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HRegion(2545): Finished memstore flush of ~344 B/344, currentsize=0 B/0 for region hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 
in 23ms, sequenceid=6, compaction requested=false 2016-08-10 15:47:43,941 INFO [StoreCloserThread-hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f.-1] regionserver.HStore(839): Closed info 2016-08-10 15:47:43,941 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526/10.22.16.34%2C56262%2C1470869110526.regiongroup-0.1470869111461 2016-08-10 15:47:43,945 DEBUG [RS_CLOSE_REGION-10.22.16.34:56262-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/namespace/f9abaaef3dbd3930695d90325cf0be0f/recovered.edits/9.seqid to file, newSeqId=9, maxSeqId=2 2016-08-10 15:47:43,946 INFO [RS_CLOSE_REGION-10.22.16.34:56262-0] regionserver.HRegion(1552): Closed hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:47:43,946 DEBUG [RS_CLOSE_REGION-10.22.16.34:56262-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1470869110964.f9abaaef3dbd3930695d90325cf0be0f. 2016-08-10 15:47:44,000 INFO [master//10.22.16.34:0.logRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-10 15:47:44,036 INFO [regionserver//10.22.16.34:0.logRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-10 15:47:44,037 INFO [regionserver//10.22.16.34:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.16.34:0.leaseChecker closing leases 2016-08-10 15:47:44,037 INFO [master//10.22.16.34:0.leaseChecker] regionserver.Leases(146): master//10.22.16.34:0.leaseChecker closing leases 2016-08-10 15:47:44,037 INFO [master//10.22.16.34:0.leaseChecker] regionserver.Leases(149): master//10.22.16.34:0.leaseChecker closed leases 2016-08-10 15:47:44,037 INFO [RS_OPEN_META-10.22.16.34:56262-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-10 15:47:44,037 INFO [regionserver//10.22.16.34:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.16.34:0.leaseChecker closed leases 2016-08-10 15:47:44,107 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1091): stopping server 10.22.16.34,56266,1470869110579; all regions closed. 
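
The close of hbase:namespace just above illustrates the flush protocol: the memstore is written out under the region's .tmp directory first ("Flushed ... into tmp file ..."), and only the completed file is renamed into the store ("Committing store file ... as .../info/..."), so readers never observe a half-written store file. A local-filesystem sketch of that rename-to-commit idea with java.nio; the file name is copied from the log for flavor, everything else is hypothetical:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class TmpThenCommit {
        public static void main(String[] args) throws IOException {
            Path regionDir = Files.createTempDirectory("region");
            Path tmpDir = Files.createDirectories(regionDir.resolve(".tmp"));
            Path storeDir = Files.createDirectories(regionDir.resolve("info"));

            // 1. flush: write the whole file under .tmp/
            Path tmpFile = tmpDir.resolve("936be7fe559643bcb99fb0c73f93bc23");
            Files.write(tmpFile, "flushed cells".getBytes());

            // 2. commit: a single rename publishes the finished file
            Path committed = storeDir.resolve(tmpFile.getFileName());
            Files.move(tmpFile, committed, StandardCopyOption.ATOMIC_MOVE);
            System.out.println("Committed " + committed);
        }
    }
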
2016-08-10 15:47:44,107 DEBUG [RS:0;10.22.16.34:56266] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579
2016-08-10 15:47:44,115 INFO [IPC Server handler 6 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741835_1011{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|RBW]]} size 91
2016-08-10 15:47:44,333 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=3.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/.tmp/12feef51286c422ebf3207790901f2f4
2016-08-10 15:47:44,350 INFO [IPC Server handler 5 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|RBW]]} size 0
2016-08-10 15:47:44,351 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=704, hasBloomFilter=false, into tmp file hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/.tmp/669a854063a94270aaa27c5e8be49a56
2016-08-10 15:47:44,358 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/.tmp/12feef51286c422ebf3207790901f2f4 as hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info/12feef51286c422ebf3207790901f2f4
2016-08-10 15:47:44,364 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HStore(934): Added hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/info/12feef51286c422ebf3207790901f2f4, entries=14, sequenceid=15, filesize=6.2 K
2016-08-10 15:47:44,365 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/.tmp/669a854063a94270aaa27c5e8be49a56 as hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table/669a854063a94270aaa27c5e8be49a56
2016-08-10 15:47:44,371 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HStore(934): Added hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/table/669a854063a94270aaa27c5e8be49a56, entries=4, sequenceid=15, filesize=4.7 K
2016-08-10 15:47:44,372 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742
2016-08-10 15:47:44,372 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegion(2545): Finished memstore flush of ~4.02 KB/4112, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 455ms, sequenceid=15, compaction requested=false
2016-08-10 15:47:44,373 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-10 15:47:44,374 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-10 15:47:44,374 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta/10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0.1470869110742
2016-08-10 15:47:44,379 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56251/user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/data/hbase/meta/1588230740/recovered.edits/18.seqid to file, newSeqId=18, maxSeqId=3
2016-08-10 15:47:44,379 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-08-10 15:47:44,380 INFO [RS_CLOSE_META-10.22.16.34:56262-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-10 15:47:44,380 DEBUG [RS_CLOSE_META-10.22.16.34:56262-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-08-10 15:47:44,523 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1091): stopping server 10.22.16.34,56262,1470869110526; all regions closed.
2016-08-10 15:47:44,523 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526.meta
2016-08-10 15:47:44,525 DEBUG [RS:0;10.22.16.34:56266] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs
2016-08-10 15:47:44,525 INFO [RS:0;10.22.16.34:56266] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56266%2C1470869110579.regiongroup-0:(num 1470869112737)
2016-08-10 15:47:44,525 DEBUG [RS:0;10.22.16.34:56266] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56266,1470869110579
2016-08-10 15:47:44,529 INFO [IPC Server handler 0 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|RBW]]} size 83
2016-08-10 15:47:44,530 INFO [IPC Server handler 5 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|RBW]]} size 669
2016-08-10 15:47:44,532 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs
2016-08-10 15:47:44,532 INFO [M:0;10.22.16.34:56262] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56262%2C1470869110526.meta.regiongroup-0:(num 1470869110742)
2016-08-10 15:47:44,532 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526
2016-08-10 15:47:44,537 INFO [IPC Server handler 7 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6d5b89e5-d721-4d54-a8ae-d1ad9b1a53df:NORMAL:127.0.0.1:56253|RBW]]} size 83
2016-08-10 15:47:44,539 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs
2016-08-10 15:47:44,539 INFO [M:0;10.22.16.34:56262] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56262%2C1470869110526.regiongroup-1:(num 1470869111726)
2016-08-10 15:47:44,539 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/WALs/10.22.16.34,56262,1470869110526
2016-08-10 15:47:44,542 INFO [IPC Server handler 6 on 56251] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56253 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-02fd5a39-2a69-4853-b3df-1271a4ddefe4:NORMAL:127.0.0.1:56253|RBW]]} size 83
2016-08-10 15:47:44,544 DEBUG [M:0;10.22.16.34:56262] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs
2016-08-10 15:47:44,544 INFO [M:0;10.22.16.34:56262] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56262%2C1470869110526.regiongroup-0:(num 1470869111461)
2016-08-10 15:47:44,544 DEBUG [M:0;10.22.16.34:56262] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:44,545 INFO [M:0;10.22.16.34:56262] regionserver.Leases(146): M:0;10.22.16.34:56262 closing leases
2016-08-10 15:47:44,545 INFO [M:0;10.22.16.34:56262] regionserver.Leases(149): M:0;10.22.16.34:56262 closed leases
2016-08-10 15:47:44,545 INFO [M:0;10.22.16.34:56262] hbase.ChoreService(323): Chore service for: 10.22.16.34,56262,1470869110526 had [[ScheduledChore: Name: 10.22.16.34,56262,1470869110526-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.16.34:56262 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56262,1470869110526-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.16.34,56262,1470869110526 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56262,1470869110526-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56262,1470869110526-MemstoreFlusherChore Period: 1000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56262,1470869110526-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56262,1470869110526-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-08-10 15:47:44,547 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.16.34,56262,1470869110526
2016-08-10 15:47:44,547 INFO [M:0;10.22.16.34:56262] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2016-08-10 15:47:44,547 INFO [M:0;10.22.16.34:56262] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
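Aside (not part of the log): the ChoreService shutdown record above enumerates the master's periodic chores (BalancerChore, LogsCleaner, HFileCleaner, and so on). A minimal sketch, assuming the HBase 1.x ScheduledChore/ChoreService API; the chore name "ExampleCleaner" is hypothetical:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // A Stoppable lets the chore observe shutdown, as HRegionServer does.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped = false;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // "ExampleCleaner" is hypothetical; the period is in milliseconds.
    ScheduledChore cleaner = new ScheduledChore("ExampleCleaner", stopper, 60000) {
      @Override protected void chore() {
        // periodic work (e.g. deleting expired files) would go here
      }
    };
    ChoreService service = new ChoreService("example");
    service.scheduleChore(cleaner);
    Thread.sleep(5000);
    // Cancels outstanding chores; this is the path that produces the
    // "Chore service for: ... had [...] on shutdown" records in the log.
    service.shutdown();
  }
}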
2016-08-10 15:47:44,547 INFO [M:0;10.22.16.34:56262] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.16.34,56266,1470869110579, 10.22.16.34,56262,1470869110526
2016-08-10 15:47:44,644 INFO [10.22.16.34,56262,1470869110526_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2016-08-10 15:47:44,691 INFO [10.22.16.34,56266,1470869110579_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.16.34,56266,1470869110579-MemstoreFlusherChore was stopped
2016-08-10 15:47:44,935 DEBUG [RS:0;10.22.16.34:56266] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/c7881c4f-81da-4071-b4a5-d4058ccc7d57/oldWALs
2016-08-10 15:47:44,935 INFO [RS:0;10.22.16.34:56266] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56266%2C1470869110579.regiongroup-1:(num 1470869113877)
2016-08-10 15:47:44,936 DEBUG [RS:0;10.22.16.34:56266] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:44,936 INFO [RS:0;10.22.16.34:56266] regionserver.Leases(146): RS:0;10.22.16.34:56266 closing leases
2016-08-10 15:47:44,936 INFO [RS:0;10.22.16.34:56266] regionserver.Leases(149): RS:0;10.22.16.34:56266 closed leases
2016-08-10 15:47:44,936 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56262] ipc.RpcServer$Listener(912): RpcServer.listener,port=56262: DISCONNECTING client 10.22.16.34:56272 because read count=-1. Number of active connections: 1
2016-08-10 15:47:44,936 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (792369241) to /10.22.16.34:56262 from tyu.hfs.1: closed
2016-08-10 15:47:44,936 INFO [RS:0;10.22.16.34:56266] hbase.ChoreService(323): Chore service for: 10.22.16.34,56266,1470869110579 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.22.16.34,56266,1470869110579 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-08-10 15:47:44,937 INFO [RS:0;10.22.16.34:56266] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-10 15:47:44,937 INFO [RS:0;10.22.16.34:56266] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-10 15:47:44,937 INFO [RS:0;10.22.16.34:56266] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-10 15:47:44,938 INFO [RS:0;10.22.16.34:56266] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-10 15:47:44,941 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.16.34,56266,1470869110579
2016-08-10 15:47:44,941 INFO [RS:0;10.22.16.34:56266] ipc.RpcServer(2336): Stopping server on 56266
2016-08-10 15:47:44,941 INFO [RpcServer.listener,port=56266] ipc.RpcServer$Listener(816): RpcServer.listener,port=56266: stopping
2016-08-10 15:47:44,942 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-10 15:47:44,942 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-10 15:47:44,943 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.16.34,56266,1470869110579
2016-08-10 15:47:44,943 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.16.34,56266,1470869110579
2016-08-10 15:47:44,943 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56266-0x15676a151160007, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-10 15:47:44,943 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.16.34,56266,1470869110579]
2016-08-10 15:47:44,948 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.16.34,56266,1470869110579 expired; onlineServers=1
2016-08-10 15:47:44,948 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-10 15:47:44,948 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1135): stopping server 10.22.16.34,56266,1470869110579; zookeeper connection closed.
2016-08-10 15:47:44,948 INFO [RS:0;10.22.16.34:56266] regionserver.HRegionServer(1138): RS:0;10.22.16.34:56266 exiting
2016-08-10 15:47:44,948 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@31313429] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@31313429
2016-08-10 15:47:44,948 INFO [M:0;10.22.16.34:56262] master.ServerManager(562): ZK shows there is only the master self online, exiting now
2016-08-10 15:47:44,948 DEBUG [M:0;10.22.16.34:56262] master.HMaster(1127): Stopping service threads
2016-08-10 15:47:44,948 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-08-10 15:47:44,949 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/master
2016-08-10 15:47:44,949 INFO [M:0;10.22.16.34:56262] hbase.ChoreService(323): Chore service for: 10.22.16.34,56262,1470869110526_splitLogManager_ had [] on shutdown
2016-08-10 15:47:44,949 INFO [M:0;10.22.16.34:56262] master.LogRollMasterProcedureManager(55): stop: server shutting down.
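Aside (not part of the log): the NodeDeleted/NodeChildrenChanged events above are how RegionServerTracker notices a regionserver's ephemeral znode disappearing from under /2/rs. A minimal sketch, assuming the plain ZooKeeper client API; the quorum address and znode path are taken from the log, the 30s session timeout is a guess:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsWatchSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("localhost:50432", 30000, new Watcher() {
      @Override public void process(WatchedEvent event) {
        // NodeChildrenChanged on /2/rs fires when an ephemeral rs node goes away
        System.out.println("type=" + event.getType() + ", path=" + event.getPath());
      }
    });
    // Registers the default watcher for child changes, analogous to what
    // RegionServerTracker does before processing server expirations.
    zk.getChildren("/2/rs", true);
    Thread.sleep(60000);
    zk.close();
  }
}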
2016-08-10 15:47:44,949 INFO [M:0;10.22.16.34:56262] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-08-10 15:47:44,950 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-08-10 15:47:44,950 INFO [M:0;10.22.16.34:56262] ipc.RpcServer(2336): Stopping server on 56262
2016-08-10 15:47:44,950 INFO [RpcServer.listener,port=56262] ipc.RpcServer$Listener(816): RpcServer.listener,port=56262: stopping
2016-08-10 15:47:44,950 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-10 15:47:44,950 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-10 15:47:44,951 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56262-0x15676a151160006, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.16.34,56262,1470869110526
2016-08-10 15:47:44,951 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.16.34,56262,1470869110526]
2016-08-10 15:47:44,952 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1135): stopping server 10.22.16.34,56262,1470869110526; zookeeper connection closed.
2016-08-10 15:47:44,952 INFO [M:0;10.22.16.34:56262] regionserver.HRegionServer(1138): M:0;10.22.16.34:56262 exiting
2016-08-10 15:47:44,952 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-08-10 15:47:44,960 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-10 15:47:45,066 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/dfscluster_c8a285b4-f1aa-4075-b261-2da854c81454/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/dfscluster_c8a285b4-f1aa-4075-b261-2da854c81454/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:56251] datanode.BPServiceActor(704): BPOfferService for Block pool BP-902157347-10.22.16.34-1470869109814 (Datanode Uuid 8a9680a1-308c-48dd-898f-02613d074ad5) service to localhost/127.0.0.1:56251 interrupted
2016-08-10 15:47:45,066 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/dfscluster_c8a285b4-f1aa-4075-b261-2da854c81454/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f5038c46-f01d-4bd3-8c7c-b9a2e5f47540/dfscluster_c8a285b4-f1aa-4075-b261-2da854c81454/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:56251] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-902157347-10.22.16.34-1470869109814 (Datanode Uuid 8a9680a1-308c-48dd-898f-02613d074ad5) service to localhost/127.0.0.1:56251
2016-08-10 15:47:45,125 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-10 15:47:45,249 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down
2016-08-10 15:47:45,249 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster
2016-08-10 15:47:45,250 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-10 15:47:45,250 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160005
2016-08-10 15:47:45,252 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:45,252 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster
2016-08-10 15:47:45,252 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (1895183647) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:45,252 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController
2016-08-10 15:47:45,253 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested
2016-08-10 15:47:45,252 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56320 because read count=-1. Number of active connections: 9
2016-08-10 15:47:45,252 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-778847904) to /10.22.16.34:56226 from tyu: closed
2016-08-10 15:47:45,252 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56248 because read count=-1. Number of active connections: 9
2016-08-10 15:47:45,253 INFO [M:0;10.22.16.34:56226] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-10 15:47:45,253 DEBUG [RpcServer.reader=0,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56400 because read count=-1. Number of active connections: 6
2016-08-10 15:47:45,253 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-1334855743) to /10.22.16.34:56228 from tyu: closed
2016-08-10 15:47:45,254 INFO [M:0;10.22.16.34:56226] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-10 15:47:45,254 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-08-10 15:47:45,254 INFO [SplitLogWorker-10.22.16.34:56226] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
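Aside (not part of the log): the "Minicluster is down" / "Shutting down minicluster" pair above comes from two HBaseTestingUtility mini-clusters being torn down in sequence, one per ZooKeeper base znode (/1 and /2). A minimal sketch, assuming the HBase 1.x test API, of the lifecycle that produces these records; the table name "t1" and family "f" are hypothetical:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Starts 1 master, 1 regionserver, 1 datanode, plus a mini ZK quorum.
    util.startMiniCluster(1);
    try {
      Table table = util.createTable(TableName.valueOf("t1"), Bytes.toBytes("f"));
      table.put(new Put(Bytes.toBytes("row1"))
          .addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v")));
    } finally {
      // Drives the close/flush/WAL-archival sequence seen throughout this log,
      // ending with "Minicluster is down".
      util.shutdownMiniCluster();
    }
  }
}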
2016-08-10 15:47:45,254 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-10 15:47:45,254 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-10 15:47:45,254 INFO [M:0;10.22.16.34:56226] procedure2.ProcedureExecutor(532): Stopping the procedure executor
2016-08-10 15:47:45,254 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested
2016-08-10 15:47:45,254 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-08-10 15:47:45,255 INFO [RS:0;10.22.16.34:56228] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-10 15:47:45,254 INFO [M:0;10.22.16.34:56226] wal.WALProcedureStore(232): Stopping the WAL Procedure Store
2016-08-10 15:47:45,254 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-10 15:47:45,254 INFO [SplitLogWorker-10.22.16.34:56226] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.16.34,56226,1470869103454 exiting
2016-08-10 15:47:45,255 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-10 15:47:45,255 INFO [SplitLogWorker-10.22.16.34:56228] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-10 15:47:45,255 INFO [RS:0;10.22.16.34:56228] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-10 15:47:45,255 INFO [SplitLogWorker-10.22.16.34:56228] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.16.34,56228,1470869104167 exiting
2016-08-10 15:47:45,255 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-10 15:47:45,255 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-10 15:47:45,255 INFO [RS:0;10.22.16.34:56228] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-10 15:47:45,256 INFO [RS:0;10.22.16.34:56228] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-10 15:47:45,256 INFO [RS:0;10.22.16.34:56228] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] handler.CloseRegionHandler(90): Processing close of ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:47:45,256 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1063): stopping server 10.22.16.34,56228,1470869104167
2016-08-10 15:47:45,256 DEBUG [RS:0;10.22.16.34:56228] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-10 15:47:45,256 INFO [RS:0;10.22.16.34:56228] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160002
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] handler.CloseRegionHandler(90): Processing close of ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1419): Closing ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.: disabling compactions & flushes
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1419): Closing ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.: disabling compactions & flushes
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1446): Updates disabled for region ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:47:45,256 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.: disabling compactions & flushes
2016-08-10 15:47:45,257 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1446): Updates disabled for region ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:47:45,257 DEBUG [RS:0;10.22.16.34:56228] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:45,257 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:47:45,257 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1292): Waiting on 9 regions to close
2016-08-10 15:47:45,257 DEBUG [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1296): {066be6466168f97a0986d6b8bafdb971=ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971., 3d6498df4d520f901c490789b272c507=ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507., 8229c2c41c671b66ea383beee31266e1=ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1., bb117bea47747375164e98ce6287a201=hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201., a06bab69e6ee6a1a194d4fd364f48357=ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357., 2046092792b2b999d6593fd7d2a8f33b=ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b., f159bc2dc00e160a8e40e9cbd5189e8f=ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f., eca8595ba8e4dbe092e67a04f23a6fe3=ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3., 1af52b0fe0f87b7398a77bf958343426=ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.}
2016-08-10 15:47:45,257 INFO [StoreCloserThread-ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,257 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (1995833493) to /10.22.16.34:56226 from tyu.hfs.0: closed
2016-08-10 15:47:45,257 DEBUG [RpcServer.reader=2,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56348 because read count=-1. Number of active connections: 7
2016-08-10 15:47:45,257 INFO [StoreCloserThread-ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,259 INFO [StoreCloserThread-ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,259 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825
2016-08-10 15:47:45,259 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:47:45,259 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:47:45,265 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 465
2016-08-10 15:47:45,266 INFO [M:0;10.22.16.34:56226] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-10 15:47:45,266 INFO [M:0;10.22.16.34:56226] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-10 15:47:45,266 INFO [M:0;10.22.16.34:56226] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-10 15:47:45,266 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1063): stopping server 10.22.16.34,56226,1470869103454
2016-08-10 15:47:45,266 DEBUG [M:0;10.22.16.34:56226] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-10 15:47:45,266 DEBUG [RS_CLOSE_REGION-10.22.16.34:56226-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:47:45,266 INFO [M:0;10.22.16.34:56226] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x15676a151160003
2016-08-10 15:47:45,267 DEBUG [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HRegion(1419): Closing hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.: disabling compactions & flushes
2016-08-10 15:47:45,267 DEBUG [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:47:45,267 INFO [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=1016 B
2016-08-10 15:47:45,267 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:45,268 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:47:45,268 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/test-14708691290513/066be6466168f97a0986d6b8bafdb971/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2
2016-08-10 15:47:45,268 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/test-14708691290512/8229c2c41c671b66ea383beee31266e1/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2
2016-08-10 15:47:45,268 INFO [M:0;10.22.16.34:56226] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-10 15:47:45,268 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (1739252710) to /10.22.16.34:56228 from tyu: closed
2016-08-10 15:47:45,268 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (2081992562) to /10.22.16.34:56228 from tyu: closed
2016-08-10 15:47:45,268 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56259 because read count=-1. Number of active connections: 5
2016-08-10 15:47:45,268 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/table1_restore/3d6498df4d520f901c490789b272c507/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2
2016-08-10 15:47:45,268 INFO [M:0;10.22.16.34:56226] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-10 15:47:45,268 INFO [M:0;10.22.16.34:56226] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-10 15:47:45,268 INFO [M:0;10.22.16.34:56226] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-10 15:47:45,268 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56228] ipc.RpcServer$Listener(912): RpcServer.listener,port=56228: DISCONNECTING client 10.22.16.34:56347 because read count=-1. Number of active connections: 4
2016-08-10 15:47:45,269 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1292): Waiting on 2 regions to close
2016-08-10 15:47:45,269 DEBUG [M:0;10.22.16.34:56226] regionserver.HRegionServer(1296): {c6ed9588ab8edcac411fa2b23646f884=hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884., 1588230740=hbase:meta,,1.1588230740}
2016-08-10 15:47:45,269 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1552): Closed ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:47:45,270 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1552): Closed ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:47:45,270 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-08-10 15:47:45,270 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] handler.CloseRegionHandler(122): Closed ns3:test-14708691290512,,1470869135294.8229c2c41c671b66ea383beee31266e1.
2016-08-10 15:47:45,270 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:47:45,270 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] handler.CloseRegionHandler(122): Closed ns4:test-14708691290513,,1470869136550.066be6466168f97a0986d6b8bafdb971.
2016-08-10 15:47:45,270 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1470869195004.3d6498df4d520f901c490789b272c507.
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:45,270 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:47:45,270 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1419): Closing hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.: disabling compactions & flushes
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.: disabling compactions & flushes
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] handler.CloseRegionHandler(90): Processing close of ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:45,271 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=15.54 KB
2016-08-10 15:47:45,271 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-10 15:47:45,272 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1419): Closing ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.: disabling compactions & flushes
2016-08-10 15:47:45,272 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=28.55 KB
2016-08-10 15:47:45,272 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1446): Updates disabled for region ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:47:45,272 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=840 B
2016-08-10 15:47:45,273 DEBUG [sync.2] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:47:45,273 INFO [StoreCloserThread-ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,273 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:45,273 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:47:45,273 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:47:45,278 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/table2_restore/2046092792b2b999d6593fd7d2a8f33b/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2
2016-08-10 15:47:45,280 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:45,280 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1470869196390.2046092792b2b999d6593fd7d2a8f33b.
2016-08-10 15:47:45,280 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(90): Processing close of ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:47:45,280 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.: disabling compactions & flushes
2016-08-10 15:47:45,280 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:47:45,281 INFO [StoreCloserThread-ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,281 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741986_1162{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:45,281 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:47:45,281 INFO [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=10, memsize=1016, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/.tmp/503b50fae47c4472b54e69c621175b70
2016-08-10 15:47:45,283 INFO [IPC Server handler 7 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741987_1163{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:45,284 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=211, memsize=840, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/.tmp/56c09c200cab432fa0a15fd447cf6174
2016-08-10 15:47:45,284 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741988_1164{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:45,285 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=21, memsize=11.8 K, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/.tmp/e6dceb8a26674bf7875bd9d3c90a02e9
2016-08-10 15:47:45,289 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns4/table4_restore/f159bc2dc00e160a8e40e9cbd5189e8f/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-10 15:47:45,290 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:47:45,290 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(122): Closed ns4:table4_restore,,1470869199401.f159bc2dc00e160a8e40e9cbd5189e8f.
2016-08-10 15:47:45,290 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:47:45,290 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741989_1165{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:45,291 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.: disabling compactions & flushes
2016-08-10 15:47:45,291 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:47:45,291 INFO [StoreCloserThread-ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,291 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-0.1470869176825
2016-08-10 15:47:45,292 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=77, memsize=24.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/.tmp/419fb316d6414935a8e1e649172a9016
2016-08-10 15:47:45,293 DEBUG [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/.tmp/503b50fae47c4472b54e69c621175b70 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/info/503b50fae47c4472b54e69c621175b70
2016-08-10 15:47:45,295 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/.tmp/56c09c200cab432fa0a15fd447cf6174 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/56c09c200cab432fa0a15fd447cf6174
2016-08-10 15:47:45,296 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns3/table3_restore/eca8595ba8e4dbe092e67a04f23a6fe3/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-10 15:47:45,298 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:47:45,298 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1470869198149.eca8595ba8e4dbe092e67a04f23a6fe3.
2016-08-10 15:47:45,298 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(90): Processing close of ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.
2016-08-10 15:47:45,298 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1419): Closing ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.: disabling compactions & flushes
2016-08-10 15:47:45,298 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1446): Updates disabled for region ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.
2016-08-10 15:47:45,298 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=32.65 KB
2016-08-10 15:47:45,299 DEBUG [sync.3] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:47:45,299 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 419fb316d6414935a8e1e649172a9016
2016-08-10 15:47:45,301 INFO [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/info/503b50fae47c4472b54e69c621175b70, entries=6, sequenceid=10, filesize=4.9 K
2016-08-10 15:47:45,302 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:47:45,302 INFO [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HRegion(2545): Finished memstore flush of ~1016 B/1016, currentsize=0 B/0 for region hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884. in 35ms, sequenceid=10, compaction requested=false
2016-08-10 15:47:45,303 INFO [StoreCloserThread-hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.-1] regionserver.HStore(839): Closed info
2016-08-10 15:47:45,304 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/f/56c09c200cab432fa0a15fd447cf6174, entries=5, sequenceid=211, filesize=4.9 K
2016-08-10 15:47:45,304 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454/10.22.16.34%2C56226%2C1470869103454.regiongroup-1.1470869108161
2016-08-10 15:47:45,304 INFO [IPC Server handler 8 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741990_1166{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:45,304 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:47:45,305 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=21, memsize=3.7 K, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/.tmp/b93ffde70dfa4de3ac2915aa782ebfbe
2016-08-10 15:47:45,305 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(2545): Finished memstore flush of ~840 B/840, currentsize=0 B/0 for region ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357. in 33ms, sequenceid=211, compaction requested=false
2016-08-10 15:47:45,307 INFO [StoreCloserThread-ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,307 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-3.1470869134197
2016-08-10 15:47:45,309 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741991_1167{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|FINALIZED]]} size 0
2016-08-10 15:47:45,310 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=77, memsize=4.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/.tmp/16e41f8415224fffbf413ada5390d3ee
2016-08-10 15:47:45,310 DEBUG [RS_CLOSE_REGION-10.22.16.34:56226-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/namespace/c6ed9588ab8edcac411fa2b23646f884/recovered.edits/13.seqid to file, newSeqId=13, maxSeqId=2
2016-08-10 15:47:45,312 INFO [RS_CLOSE_REGION-10.22.16.34:56226-0] regionserver.HRegion(1552): Closed hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:47:45,312 DEBUG [RS_CLOSE_REGION-10.22.16.34:56226-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1470869107489.c6ed9588ab8edcac411fa2b23646f884.
2016-08-10 15:47:45,312 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns2/test-14708691290511/a06bab69e6ee6a1a194d4fd364f48357/recovered.edits/214.seqid to file, newSeqId=214, maxSeqId=2
2016-08-10 15:47:45,313 INFO [RS_CLOSE_REGION-10.22.16.34:56228-0] regionserver.HRegion(1552): Closed ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:47:45,313 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-0] handler.CloseRegionHandler(122): Closed ns2:test-14708691290511,,1470869133718.a06bab69e6ee6a1a194d4fd364f48357.
2016-08-10 15:47:45,315 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/.tmp/e6dceb8a26674bf7875bd9d3c90a02e9 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/meta/e6dceb8a26674bf7875bd9d3c90a02e9
2016-08-10 15:47:45,318 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/.tmp/419fb316d6414935a8e1e649172a9016 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/info/419fb316d6414935a8e1e649172a9016
2016-08-10 15:47:45,318 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741992_1168{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 0
2016-08-10 15:47:45,319 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=405, memsize=32.6 K, hasBloomFilter=true, into tmp file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/.tmp/6566ee03a9b849c7ab22683011305795
2016-08-10 15:47:45,321 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/meta/e6dceb8a26674bf7875bd9d3c90a02e9, entries=35, sequenceid=21, filesize=10.3 K
2016-08-10 15:47:45,322 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/.tmp/b93ffde70dfa4de3ac2915aa782ebfbe as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/session/b93ffde70dfa4de3ac2915aa782ebfbe
2016-08-10 15:47:45,324 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 419fb316d6414935a8e1e649172a9016
2016-08-10 15:47:45,325 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/info/419fb316d6414935a8e1e649172a9016, entries=100, sequenceid=77, filesize=16.5 K
2016-08-10 15:47:45,325 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/.tmp/6566ee03a9b849c7ab22683011305795 as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/6566ee03a9b849c7ab22683011305795
2016-08-10 15:47:45,326 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/.tmp/16e41f8415224fffbf413ada5390d3ee as hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/table/16e41f8415224fffbf413ada5390d3ee
2016-08-10 15:47:45,327 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/session/b93ffde70dfa4de3ac2915aa782ebfbe, entries=2, sequenceid=21, filesize=6.2 K
2016-08-10 15:47:45,328 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:47:45,328 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(2545): Finished memstore flush of ~15.54 KB/15912, currentsize=0 B/0 for region hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201. in 57ms, sequenceid=21, compaction requested=false
2016-08-10 15:47:45,329 INFO [StoreCloserThread-hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.-1] regionserver.HStore(839): Closed meta
2016-08-10 15:47:45,330 INFO [StoreCloserThread-hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.-1] regionserver.HStore(839): Closed session
2016-08-10 15:47:45,331 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-1.1470869110496
2016-08-10 15:47:45,332 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/f/6566ee03a9b849c7ab22683011305795, entries=199, sequenceid=405, filesize=12.7 K
2016-08-10 15:47:45,333 DEBUG [sync.4] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:47:45,333 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HStore(934): Added hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/table/16e41f8415224fffbf413ada5390d3ee, entries=24, sequenceid=77, filesize=5.7 K
2016-08-10 15:47:45,333 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(2545): Finished memstore flush of ~32.65 KB/33432, currentsize=0 B/0 for region ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426. in 35ms, sequenceid=405, compaction requested=false
2016-08-10 15:47:45,333 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:45,335 INFO [StoreCloserThread-ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.-1] regionserver.HStore(839): Closed f
2016-08-10 15:47:45,335 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegion(2545): Finished memstore flush of ~28.55 KB/29232, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 63ms, sequenceid=77, compaction requested=false
2016-08-10 15:47:45,335 DEBUG [sync.0] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167/10.22.16.34%2C56228%2C1470869104167.regiongroup-2.1470869132540
2016-08-10 15:47:45,338 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-10 15:47:45,339 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-10 15:47:45,339 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/backup/bb117bea47747375164e98ce6287a201/recovered.edits/24.seqid to file, newSeqId=24, maxSeqId=2
2016-08-10 15:47:45,339 DEBUG [sync.1] wal.FSHLog$SyncRunner(1275): syncing writer hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta/10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0.1470869106429
2016-08-10 15:47:45,340 INFO [RS_CLOSE_REGION-10.22.16.34:56228-2] regionserver.HRegion(1552): Closed hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:47:45,340 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-2] handler.CloseRegionHandler(122): Closed hbase:backup,,1470869109793.bb117bea47747375164e98ce6287a201.
2016-08-10 15:47:45,343 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/ns1/test-1470869129051/1af52b0fe0f87b7398a77bf958343426/recovered.edits/408.seqid to file, newSeqId=408, maxSeqId=2
2016-08-10 15:47:45,344 INFO [RS_CLOSE_REGION-10.22.16.34:56228-1] regionserver.HRegion(1552): Closed ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.
2016-08-10 15:47:45,344 DEBUG [RS_CLOSE_REGION-10.22.16.34:56228-1] handler.CloseRegionHandler(122): Closed ns1:test-1470869129051,,1470869132051.1af52b0fe0f87b7398a77bf958343426.
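Aside (not part of the log): each "Finished memstore flush" above is the close-time flush path: DefaultStoreFlusher writes an HFile into the region's .tmp directory, HRegionFileSystem commits it into the store, and HStore registers it. The same path can be driven on demand through the client API; a minimal sketch, assuming the HBase 1.x Admin interface and reusing one of the table names from this run:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Triggers memstore flush -> .tmp HFile -> commit into the 'f' store,
      // the same sequence DefaultStoreFlusher logs during region close.
      admin.flush(TableName.valueOf("ns1", "test-1470869129051"));
    }
  }
}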
2016-08-10 15:47:45,344 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:56218/user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/data/hbase/meta/1588230740/recovered.edits/80.seqid to file, newSeqId=80, maxSeqId=3
2016-08-10 15:47:45,345 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-08-10 15:47:45,346 INFO [RS_CLOSE_META-10.22.16.34:56226-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-10 15:47:45,346 DEBUG [RS_CLOSE_META-10.22.16.34:56226-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-08-10 15:47:45,449 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5c64f59] blockmanagement.BlockManager(3488): BLOCK* BlockManager: ask 127.0.0.1:56219 to delete [blk_1073741964_1140, blk_1073741901_1077, blk_1073741965_1141, blk_1073741966_1142, blk_1073741967_1143, blk_1073741904_1080, blk_1073741968_1144, blk_1073741969_1145, blk_1073741970_1146, blk_1073741971_1147, blk_1073741972_1148, blk_1073741973_1149, blk_1073741974_1150, blk_1073741975_1151, blk_1073741976_1152, blk_1073741977_1153, blk_1073741978_1154, blk_1073741979_1155, blk_1073741980_1156, blk_1073741981_1157, blk_1073741982_1158]
2016-08-10 15:47:45,462 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1091): stopping server 10.22.16.34,56228,1470869104167; all regions closed.
2016-08-10 15:47:45,463 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:45,469 INFO [IPC Server handler 1 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741882_1058{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 1511
2016-08-10 15:47:45,470 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1091): stopping server 10.22.16.34,56226,1470869103454; all regions closed.
2016-08-10 15:47:45,470 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454.meta
2016-08-10 15:47:45,474 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 83
2016-08-10 15:47:45,477 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:45,477 INFO [M:0;10.22.16.34:56226] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56226%2C1470869103454.meta.regiongroup-0:(num 1470869106429)
2016-08-10 15:47:45,477 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454
2016-08-10 15:47:45,480 INFO [IPC Server handler 3 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741881_1057{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 83
2016-08-10 15:47:45,482 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:45,483 INFO [M:0;10.22.16.34:56226] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56226%2C1470869103454.regiongroup-0:(num 1470869176824)
2016-08-10 15:47:45,483 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56226,1470869103454
2016-08-10 15:47:45,485 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 83
2016-08-10 15:47:45,488 DEBUG [M:0;10.22.16.34:56226] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:45,488 INFO [M:0;10.22.16.34:56226] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56226%2C1470869103454.regiongroup-1:(num 1470869108161)
2016-08-10 15:47:45,488 DEBUG [M:0;10.22.16.34:56226] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:45,488 INFO [M:0;10.22.16.34:56226] regionserver.Leases(146): M:0;10.22.16.34:56226 closing leases
2016-08-10 15:47:45,488 INFO [M:0;10.22.16.34:56226] regionserver.Leases(149): M:0;10.22.16.34:56226 closed leases
2016-08-10 15:47:45,488 INFO [M:0;10.22.16.34:56226] hbase.ChoreService(323): Chore service for: 10.22.16.34,56226,1470869103454 had [[ScheduledChore: Name: CatalogJanitor-10.22.16.34:56226 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56226,1470869103454-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56226,1470869103454-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56226,1470869103454-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.16.34,56226,1470869103454 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56226,1470869103454-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56226,1470869103454-MemstoreFlusherChore Period: 1000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.16.34,56226,1470869103454-ClusterStatusChore Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-08-10 15:47:45,558 INFO [regionserver//10.22.16.34:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-10 15:47:45,558 INFO [master//10.22.16.34:0.leaseChecker] regionserver.Leases(146): master//10.22.16.34:0.leaseChecker closing leases
2016-08-10 15:47:45,559 INFO [master//10.22.16.34:0.leaseChecker] regionserver.Leases(149): master//10.22.16.34:0.leaseChecker closed leases
2016-08-10 15:47:45,558 INFO [master//10.22.16.34:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-10 15:47:45,581 INFO [regionserver//10.22.16.34:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.16.34:0.leaseChecker closing leases
2016-08-10 15:47:45,581 INFO [regionserver//10.22.16.34:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.16.34:0.leaseChecker closed leases
2016-08-10 15:47:45,727 INFO [RS_OPEN_META-10.22.16.34:56226-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-10 15:47:45,732 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:47:45,732 INFO [M:0;10.22.16.34:56226] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2016-08-10 15:47:45,732 INFO [M:0;10.22.16.34:56226] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
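Every entry in the chore list above is a ScheduledChore that the ChoreService cancels at shutdown. A minimal sketch of such a chore, with an illustrative name and period rather than values from this log:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ExampleChore extends ScheduledChore {
      public ExampleChore(Stoppable stopper) {
        // name, stopper, period in milliseconds -- illustrative values
        super("ExampleChore", stopper, 60000);
      }

      @Override
      protected void chore() {
        // Periodic work goes here; the ChoreService cancels the chore on
        // shutdown, like the master's chores in the entry above.
      }

      public static void main(String[] args) {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("example");
        service.scheduleChore(new ExampleChore(stopper));
        service.shutdown(); // cancels all scheduled chores
      }
    }

Shutting the ChoreService down cancels the chore, which is what produces the "Chore: ... was stopped" lines further on in this log.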
2016-08-10 15:47:45,732 INFO [M:0;10.22.16.34:56226] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.16.34,56228,1470869104167, 10.22.16.34,56226,1470869103454
2016-08-10 15:47:45,878 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:45,879 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56228%2C1470869104167.regiongroup-0:(num 1470869176825)
2016-08-10 15:47:45,879 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:45,884 INFO [IPC Server handler 0 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 13273
2016-08-10 15:47:45,934 INFO [10.22.16.34,56228,1470869104167_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.16.34,56228,1470869104167-MemstoreFlusherChore was stopped
2016-08-10 15:47:46,062 INFO [10.22.16.34,56226,1470869103454_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2016-08-10 15:47:46,293 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:46,293 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56228%2C1470869104167.regiongroup-1:(num 1470869110496)
2016-08-10 15:47:46,293 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:46,298 INFO [IPC Server handler 5 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741843_1019{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6a1ec67f-9d4e-405a-9bd6-8844b069f33a:NORMAL:127.0.0.1:56219|RBW]]} size 47186
2016-08-10 15:47:46,707 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:46,707 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56228%2C1470869104167.regiongroup-2:(num 1470869132540)
2016-08-10 15:47:46,707 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1086): Closing WAL writer in /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/WALs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:46,712 INFO [IPC Server handler 2 on 56218] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:56219 is added to blk_1073741846_1022{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1306e099-b7c4-4e61-aefd-402a3d189b66:NORMAL:127.0.0.1:56219|RBW]]} size 25686
2016-08-10 15:47:46,774 INFO [M:0;10.22.16.34:56226] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.16.34,56228,1470869104167, 10.22.16.34,56226,1470869103454
2016-08-10 15:47:47,117 DEBUG [RS:0;10.22.16.34:56228] wal.FSHLog(1044): Moved 1 WAL file(s) to /user/tyu/test-data/f650797e-abdc-4669-aac9-39b68914fcf9/oldWALs
2016-08-10 15:47:47,117 INFO [RS:0;10.22.16.34:56228] wal.FSHLog(1047): Closed WAL: FSHLog 10.22.16.34%2C56228%2C1470869104167.regiongroup-3:(num 1470869134197)
2016-08-10 15:47:47,117 DEBUG [RS:0;10.22.16.34:56228] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-10 15:47:47,117 INFO [RS:0;10.22.16.34:56228] regionserver.Leases(146): RS:0;10.22.16.34:56228 closing leases
2016-08-10 15:47:47,117 INFO [RS:0;10.22.16.34:56228] regionserver.Leases(149): RS:0;10.22.16.34:56228 closed leases
2016-08-10 15:47:47,117 DEBUG [RpcServer.reader=1,bindAddress=10.22.16.34,port=56226] ipc.RpcServer$Listener(912): RpcServer.listener,port=56226: DISCONNECTING client 10.22.16.34:56236 because read count=-1. Number of active connections: 6
2016-08-10 15:47:47,117 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-2140771683) to /10.22.16.34:56226 from tyu.hfs.0: closed
2016-08-10 15:47:47,117 INFO [RS:0;10.22.16.34:56228] hbase.ChoreService(323): Chore service for: 10.22.16.34,56228,1470869104167 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.16.34,56228,1470869104167 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-08-10 15:47:47,118 INFO [RS:0;10.22.16.34:56228] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-10 15:47:47,118 INFO [RS:0;10.22.16.34:56228] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-10 15:47:47,118 INFO [RS:0;10.22.16.34:56228] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-10 15:47:47,118 INFO [RS:0;10.22.16.34:56228] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-10 15:47:47,121 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:47,121 INFO [RS:0;10.22.16.34:56228] ipc.RpcServer(2336): Stopping server on 56228
2016-08-10 15:47:47,122 INFO [RpcServer.listener,port=56228] ipc.RpcServer$Listener(816): RpcServer.listener,port=56228: stopping
2016-08-10 15:47:47,122 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-10 15:47:47,123 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-10 15:47:47,124 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:47,124 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.16.34,56228,1470869104167
2016-08-10 15:47:47,124 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:56228-0x15676a151160001, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-10 15:47:47,124 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.16.34,56228,1470869104167]
2016-08-10 15:47:47,125 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.16.34,56228,1470869104167 expired; onlineServers=1
2016-08-10 15:47:47,125 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1135): stopping server 10.22.16.34,56228,1470869104167; zookeeper connection closed.
2016-08-10 15:47:47,125 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-10 15:47:47,125 INFO [RS:0;10.22.16.34:56228] regionserver.HRegionServer(1138): RS:0;10.22.16.34:56228 exiting
2016-08-10 15:47:47,125 INFO [M:0;10.22.16.34:56226] master.ServerManager(562): ZK shows there is only the master self online, exiting now
2016-08-10 15:47:47,125 DEBUG [M:0;10.22.16.34:56226] master.HMaster(1127): Stopping service threads
2016-08-10 15:47:47,125 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56b0eb1c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56b0eb1c
2016-08-10 15:47:47,126 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-08-10 15:47:47,126 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master
2016-08-10 15:47:47,126 INFO [M:0;10.22.16.34:56226] hbase.ChoreService(323): Chore service for: 10.22.16.34,56226,1470869103454_splitLogManager_ had [] on shutdown
2016-08-10 15:47:47,127 INFO [M:0;10.22.16.34:56226] master.LogRollMasterProcedureManager(55): stop: server shutting down.
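The NodeDeleted events above are how the master learns that a region server is gone: each live server holds an ephemeral znode under /1/rs, and its disappearance triggers the expiration handling logged by RegionServerTracker. A minimal sketch with the stock ZooKeeper client rather than HBase's internal ZooKeeperWatcher; the connect string and znode path are illustrative:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsNodeWatcher implements Watcher {
      @Override
      public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDeleted) {
          // The master reacts to this event by expiring the server, as in
          // the "processing expiration" entry above.
          System.out.println("Ephemeral node deleted: " + event.getPath());
        }
      }

      public static void main(String[] args) throws Exception {
        // Connect string is illustrative; the test cluster in this log used
        // a random client port (localhost:50432).
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new RsNodeWatcher());
        // Placeholder znode; exists() registers a one-shot watch that fires
        // NodeDeleted when the ephemeral node goes away.
        zk.exists("/1/rs/example-server", true);
        Thread.sleep(60000); // keep the process alive to receive the event
        zk.close();
      }
    }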
2016-08-10 15:47:47,127 INFO [M:0;10.22.16.34:56226] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-08-10 15:47:47,127 INFO [M:0;10.22.16.34:56226] ipc.RpcServer(2336): Stopping server on 56226
2016-08-10 15:47:47,127 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-08-10 15:47:47,127 INFO [RpcServer.listener,port=56226] ipc.RpcServer$Listener(816): RpcServer.listener,port=56226: stopping
2016-08-10 15:47:47,127 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-10 15:47:47,127 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-10 15:47:47,128 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:56226-0x15676a151160000, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.16.34,56226,1470869103454
2016-08-10 15:47:47,128 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.16.34,56226,1470869103454]
2016-08-10 15:47:47,129 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1135): stopping server 10.22.16.34,56226,1470869103454; zookeeper connection closed.
2016-08-10 15:47:47,129 INFO [M:0;10.22.16.34:56226] regionserver.HRegionServer(1138): M:0;10.22.16.34:56226 exiting
2016-08-10 15:47:47,132 INFO [main] zookeeper.MiniZooKeeperCluster(319): Shutdown MiniZK cluster with all ZK servers
2016-08-10 15:47:47,132 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-08-10 15:47:47,137 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-10 15:47:47,230 DEBUG [10.22.16.34:56226.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x15676a151160004, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-10 15:47:47,230 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xb319bc2-0x15676a15116000d, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-10 15:47:47,231 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0xb319bc2-0x15676a15116000d, quorum=localhost:50432, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-10 15:47:47,231 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3479054d-0x15676a15116000e, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-10 15:47:47,231 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x3479054d-0x15676a15116000e, quorum=localhost:50432, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-10 15:47:47,230 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x144a74ce-0x15676a151160010, quorum=localhost:50432, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-10 15:47:47,231 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=56226-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x144a74ce-0x15676a151160010, quorum=localhost:50432, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-10 15:47:47,230 DEBUG [10.22.16.34:56262.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x15676a15116000a, quorum=localhost:50432, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-10 15:47:47,231 DEBUG [10.22.16.34:56262.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x15676a15116000a, quorum=localhost:50432, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-08-10 15:47:47,231 DEBUG [10.22.16.34:56226.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x15676a151160004, quorum=localhost:50432, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-10 15:47:47,247 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:56218] datanode.BPServiceActor(704): BPOfferService for Block pool BP-58060915-10.22.16.34-1470869099552 (Datanode Uuid df30d679-96d3-4692-b684-a43b060adbff) service to localhost/127.0.0.1:56218 interrupted
2016-08-10 15:47:47,247 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/6086d153-631b-4c48-b5a7-03a12dea94ef/dfscluster_a0561d32-3b2b-4cd9-bf07-980f21f6d1bd/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:56218] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-58060915-10.22.16.34-1470869099552 (Datanode Uuid df30d679-96d3-4692-b684-a43b060adbff) service to localhost/127.0.0.1:56218
2016-08-10 15:47:47,317 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-10 15:47:47,458 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down
2016-08-10 15:47:47,458 INFO [main] hbase.HBaseTestingUtility(2498): Stopping mini mapreduce cluster...
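The "Minicluster is down" and "Stopping mini mapreduce cluster" messages come from the HBaseTestingUtility teardown that mirrors the start-up at the head of this log. A minimal sketch of that lifecycle, with an illustrative slave count:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterLifecycle {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(1);         // 1 master, 1 regionserver, 1 datanode
        util.startMiniMapReduceCluster(); // for MapReduce-backed tests
        try {
          // Test logic runs here, e.g. against util.getConnection().
        } finally {
          util.shutdownMiniCluster();          // logs "Minicluster is down"
          util.shutdownMiniMapReduceCluster(); // "Mini mapreduce cluster stopped"
        }
      }
    }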
2016-08-10 15:47:47,463 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-10 15:47:48,777 INFO [Socket Reader #1 for port 56312] ipc.Server$Connection(1316): Auth successful for appattempt_1470869125521_0003_000001 (auth:SIMPLE)
2016-08-10 15:47:49,417 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-10 15:47:49,432 ERROR [HBase-Metrics2-1] lib.MethodMetric$2(118): Error invoking method getBlocksTotal
java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
    at org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
    at org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:401)
    at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:96)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:270)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:240)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:322)
    at com.sun.proxy.$Proxy14.postStart(Unknown Source)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:194)
    at org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:78)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.size(BlocksMap.java:203)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getTotalBlocks(BlockManager.java:3375)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlocksTotal(FSNamesystem.java:5730)
    ... 32 more
2016-08-10 15:47:49,477 ERROR [HBase-Metrics2-1] lib.MethodMetric$2(118): Error invoking method getBlocksTotal
java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
    at org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
    at org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:401)
    at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:96)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:270)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:240)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:322)
    at com.sun.proxy.$Proxy14.postStart(Unknown Source)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:194)
    at org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:78)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.size(BlocksMap.java:203)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getTotalBlocks(BlockManager.java:3375)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlocksTotal(FSNamesystem.java:5730)
    ... 32 more
2016-08-10 15:48:01,511 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-10 15:48:15,631 ERROR [Thread[Thread-636,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-10 15:48:15,633 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-10 15:48:15,744 WARN [ApplicationMaster Launcher] amlauncher.ApplicationMasterLauncher$LauncherThread(122): org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2016-08-10 15:48:15,747 ERROR [ResourceManager Event Processor] resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor(675): Returning, interrupted : java.lang.InterruptedException
2016-08-10 15:48:15,748 ERROR [Thread[Thread-467,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-10 15:48:15,751 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-10 15:48:15,753 ERROR [Thread[Thread-446,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-10 15:48:15,753 INFO [main] hbase.HBaseTestingUtility(2501): Mini mapreduce cluster stopped
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1003cac6
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1003cac6
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1003cac6
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1003cac6
2016-08-10 15:48:15,759 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(120): Starting fs shutdown hook thread.
2016-08-10 15:48:15,766 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
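The getBlocksTotal errors earlier in this section take the shape they do because the metrics library invokes metric getters reflectively: whatever the getter throws is wrapped in an InvocationTargetException, and here the underlying NullPointerException comes from a BlocksMap already torn down by the shutdown. A minimal standalone sketch of that wrapping, with illustrative names:

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;

    public class ReflectionWrapDemo {
      // Stand-in for FSNamesystem#getBlocksTotal after shutdown cleared state.
      public long getBlocksTotal() {
        throw new NullPointerException();
      }

      public static void main(String[] args) throws Exception {
        Method m = ReflectionWrapDemo.class.getMethod("getBlocksTotal");
        try {
          m.invoke(new ReflectionWrapDemo());
        } catch (InvocationTargetException e) {
          // The metrics library logs this wrapper, with the NPE as its cause.
          System.out.println("cause: " + e.getCause());
        }
      }
    }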