2016-08-18 10:05:25,385 INFO [main] hbase.HBaseTestingUtility(496): Created new mini-cluster data directory: /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6, deleteOnExit=true
2016-08-18 10:05:26,390 INFO [main] zookeeper.MiniZooKeeperCluster(276): Started MiniZooKeeperCluster and ran successful 'stat' on client port=49480
2016-08-18 10:05:26,415 INFO [main] hbase.HBaseTestingUtility(1013): Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2016-08-18 10:05:26,415 INFO [main] hbase.HBaseTestingUtility(743): Setting test.cache.data to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/cache_data in system properties and HBase conf
2016-08-18 10:05:26,416 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.tmp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_tmp in system properties and HBase conf
2016-08-18 10:05:26,416 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.log.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_logs in system properties and HBase conf
2016-08-18 10:05:26,416 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.local.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/mapred_local in system properties and HBase conf
2016-08-18 10:05:26,417 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.temp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/mapred_temp in system properties and HBase conf
2016-08-18 10:05:26,417 INFO [main] hbase.HBaseTestingUtility(734): read short circuit is OFF
2016-08-18 10:05:26,520 WARN [main] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-08-18 10:05:26,788 DEBUG [main] fs.HFileSystem(221): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2016-08-18 10:05:27,332 WARN [main] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2016-08-18 10:05:27,429 INFO [main] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-08-18 10:05:27,471 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:27,496 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.3/hadoop-hdfs-2.7.3-tests.jar!/webapps/hdfs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_59387_hdfs____in1muw/webapp
2016-08-18 10:05:27,615 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:59387
2016-08-18 10:05:28,071 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:28,074 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.3/hadoop-hdfs-2.7.3-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_59390_datanode____u3xs25/webapp
2016-08-18 10:05:28,146 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:59390
2016-08-18 10:05:28,618 INFO [Block report processor] blockmanagement.BlockManager(1883): BLOCK* processReport: from storage DS-e0680069-b93f-4c56-b218-5416e527e484 node DatanodeRegistration(127.0.0.1:59389, datanodeUuid=f1a1ce0d-aa7a-4774-bdcb-e77714320637, infoPort=59391, infoSecurePort=0, ipcPort=59392, storageInfo=lv=-56;cid=testClusterID;nsid=1867876286;c=0), blocks: 0, hasStaleStorage: true, processing time: 2 msecs
2016-08-18 10:05:28,618 INFO [Block report processor] blockmanagement.BlockManager(1883): BLOCK* processReport: from storage DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e node DatanodeRegistration(127.0.0.1:59389, datanodeUuid=f1a1ce0d-aa7a-4774-bdcb-e77714320637, infoPort=59391, infoSecurePort=0, ipcPort=59392, storageInfo=lv=-56;cid=testClusterID;nsid=1867876286;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-08-18 10:05:28,676 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:28,678 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:28,884 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 7
2016-08-18 10:05:29,296 INFO [main] util.FSUtils(749): Created version file at hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179 with version=8
2016-08-18 10:05:29,896 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-18 10:05:31,841 INFO [main] client.ConnectionUtils(106): master//10.22.9.171:0 server-side HConnection retries=350
2016-08-18 10:05:32,018 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-18 10:05:32,065 INFO [main] ipc.RpcServer$Listener(635): master//10.22.9.171:0: started 3 reader(s) listening on port=59396
2016-08-18 10:05:32,257 INFO [main] hfile.CacheConfig(548): Allocating LruBlockCache size=995.60 MB, blockSize=64 KB
2016-08-18 10:05:32,284 DEBUG [main] hfile.CacheConfig(562): Trying to use Internal l2 cache
2016-08-18 10:05:32,285 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:32,286 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:32,307 INFO [main] mob.MobFileCache(121): MobFileCache is initialized, and the cache size is 1000
2016-08-18 10:05:32,309 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:32,459 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=master:59396 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:32,488 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:593960x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:32,490 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): master:59396-0x1569e9d55410000 connected
2016-08-18 10:05:32,588 DEBUG [main] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-08-18 10:05:32,589 DEBUG [main] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-18 10:05:32,592 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-18 10:05:32,592 INFO [RpcServer.listener,port=59396] ipc.RpcServer$Listener(769): RpcServer.listener,port=59396: starting
2016-08-18 10:05:32,593 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-18 10:05:32,594 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-18 10:05:32,594 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-18 10:05:32,594 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-18 10:05:32,594 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-18 10:05:32,595 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-18 10:05:32,595 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-18 10:05:32,595 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-18 10:05:32,595 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
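[Editorial note: the mini-cluster whose startup is traced above is normally driven from test code through HBaseTestingUtility. A minimal Java sketch, assuming the 2.0-era test APIs in use in this log; the class name here is hypothetical:]

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Brings up an in-process ZooKeeper, a mini-DFS, one master and one
        // regionserver -- the "1 master(s) and 1 regionserver(s) and
        // 1 datanode(s)" reported by HBaseTestingUtility(1013) above.
        util.startMiniCluster(1);
        try {
          // ... test logic against util.getConnection() goes here ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }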
2016-08-18 10:05:32,595 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-18 10:05:32,596 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-18 10:05:32,596 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-18 10:05:32,596 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-18 10:05:32,641 INFO [main] master.HMaster(397): hbase.rootdir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179, hbase.cluster.distributed=false
2016-08-18 10:05:32,745 DEBUG [main] impl.BackupManager(134): Added log cleaner: org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-18 10:05:32,745 DEBUG [main] impl.BackupManager(135): Added master procedure manager: org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
2016-08-18 10:05:32,745 DEBUG [main] impl.BackupManager(136): Added master observer: org.apache.hadoop.hbase.backup.master.BackupController
2016-08-18 10:05:32,784 INFO [main] master.HMaster(1719): Adding backup master ZNode /1/backup-masters/10.22.9.171,59396,1471539932179
2016-08-18 10:05:32,813 DEBUG [main] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/backup-masters/10.22.9.171,59396,1471539932179
2016-08-18 10:05:32,818 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/master
2016-08-18 10:05:32,819 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/master
2016-08-18 10:05:32,820 INFO [10.22.9.171:59396.activeMasterManager] master.ActiveMasterManager(170): Deleting ZNode for /1/backup-masters/10.22.9.171,59396,1471539932179 from backup master directory
2016-08-18 10:05:32,821 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/master
2016-08-18 10:05:32,821 DEBUG [main-EventThread] master.ActiveMasterManager(126): A master is now available
2016-08-18 10:05:32,821 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/backup-masters/10.22.9.171,59396,1471539932179
2016-08-18 10:05:32,829 WARN [10.22.9.171:59396.activeMasterManager] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-18 10:05:32,829 INFO [10.22.9.171:59396.activeMasterManager] master.ActiveMasterManager(179): Registered Active Master=10.22.9.171,59396,1471539932179
2016-08-18 10:05:32,864 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-18 10:05:32,866 INFO [main] client.ConnectionUtils(106): regionserver//10.22.9.171:0 server-side HConnection retries=350
2016-08-18 10:05:32,866 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-18 10:05:32,868 INFO [main] ipc.RpcServer$Listener(635): regionserver//10.22.9.171:0: started 3 reader(s) listening on port=59399
2016-08-18 10:05:32,875 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:32,875 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:32,877 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:32,879 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:59399 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:32,882 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:593990x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:32,882 DEBUG [main] zookeeper.ZKUtil(365): regionserver:593990x0, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/master
2016-08-18 10:05:32,883 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): regionserver:59399-0x1569e9d55410001 connected
2016-08-18 10:05:32,883 DEBUG [main] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-18 10:05:32,884 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-18 10:05:32,884 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-18 10:05:32,884 INFO [RpcServer.listener,port=59399] ipc.RpcServer$Listener(769): RpcServer.listener,port=59399: starting
2016-08-18 10:05:32,884 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-18 10:05:32,884 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-18 10:05:32,885 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-18 10:05:32,885 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-18 10:05:32,885 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-18 10:05:32,885 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-18 10:05:32,885 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-18 10:05:32,886 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
2016-08-18 10:05:32,886 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-18 10:05:32,886 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-18 10:05:32,886 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-18 10:05:32,886 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-18 10:05:32,937 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:05:32,940 DEBUG [10.22.9.171:59396.activeMasterManager] util.FSUtils(901): Created cluster ID file at hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/hbase.id with ID: 3d5f82dd-d1d3-4a46-84e7-df3a033fc67d
2016-08-18 10:05:33,096 INFO [M:0;10.22.9.171:59396] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x685fbfed connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:33,096 INFO [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x66319623 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:33,100 DEBUG [M:0;10.22.9.171:59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x685fbfed0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:33,100 DEBUG [RS:0;10.22.9.171:59399-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x663196230x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:33,101 INFO [M:0;10.22.9.171:59396] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2016-08-18 10:05:33,101 INFO [RS:0;10.22.9.171:59399] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2016-08-18 10:05:33,101 DEBUG [M:0;10.22.9.171:59396] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster
2016-08-18 10:05:33,101 DEBUG [RS:0;10.22.9.171:59399] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster
2016-08-18 10:05:33,101 DEBUG [M:0;10.22.9.171:59396-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x685fbfed-0x1569e9d55410002 connected
2016-08-18 10:05:33,102 DEBUG [RS:0;10.22.9.171:59399-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x66319623-0x1569e9d55410003 connected
2016-08-18 10:05:33,150 INFO [10.22.9.171:59396.activeMasterManager] master.MasterFileSystem(528): BOOTSTRAP: creating hbase:meta region
2016-08-18 10:05:33,160 INFO [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(6162): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'table', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179 Table name == hbase:meta
2016-08-18 10:05:33,163 DEBUG [M:0;10.22.9.171:59396] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f0a7794, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:05:33,163 DEBUG [RS:0;10.22.9.171:59399] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78309d42, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:05:33,163 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:05:33,163 DEBUG [RS:0;10.22.9.171:59399] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:05:33,164 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(138): Create NioEventLoopGroup with maxThreads = 0
2016-08-18 10:05:33,166 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(113): Create global event loop group NioEventLoopGroup
2016-08-18 10:05:33,166 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:05:33,166 DEBUG [RS:0;10.22.9.171:59399] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:05:33,242 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:05:33,247 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740
2016-08-18 10:05:33,413 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:33,495 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:33,507 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/info
2016-08-18 10:05:33,528 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:33,529 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:33,531 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/table
2016-08-18 10:05:33,603 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740
2016-08-18 10:05:33,641 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead
2016-08-18 10:05:33,650 DEBUG [10.22.9.171:59396.activeMasterManager] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:05:33,650 INFO [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=2
2016-08-18 10:05:33,650 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-18 10:05:33,651 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-18 10:05:33,652 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-18 10:05:33,652 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-18 10:05:33,652 INFO [10.22.9.171:59396.activeMasterManager] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-18 10:05:33,786 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:05:33,789 DEBUG [10.22.9.171:59396.activeMasterManager] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2016-08-18 10:05:33,802 INFO [10.22.9.171:59396.activeMasterManager] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:33,840 INFO [10.22.9.171:59396.activeMasterManager] coordination.ZKSplitLogManagerCoordination(599): Found 0 orphan tasks and 0 rescan nodes
2016-08-18 10:05:33,841 DEBUG [10.22.9.171:59396.activeMasterManager] util.FSTableDescriptors(222): Fetching table descriptors from the filesystem.
2016-08-18 10:05:34,056 INFO [10.22.9.171:59396.activeMasterManager] balancer.StochasticLoadBalancer(156): loading config
2016-08-18 10:05:34,102 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/balancer
2016-08-18 10:05:34,111 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/normalizer
2016-08-18 10:05:34,117 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/split
2016-08-18 10:05:34,118 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/merge
2016-08-18 10:05:34,274 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running
2016-08-18 10:05:34,274 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running
2016-08-18 10:05:34,275 INFO [10.22.9.171:59396.activeMasterManager] master.HMaster(620): Server active/primary master=10.22.9.171,59396,1471539932179, sessionid=0x1569e9d55410000, setting cluster-up flag (Was=false)
2016-08-18 10:05:34,277 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(813): ClusterId : 3d5f82dd-d1d3-4a46-84e7-df3a033fc67d
2016-08-18 10:05:34,277 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(813): ClusterId : 3d5f82dd-d1d3-4a46-84e7-df3a033fc67d
2016-08-18 10:05:34,309 INFO [M:0;10.22.9.171:59396] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
2016-08-18 10:05:34,309 INFO [RS:0;10.22.9.171:59399] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
2016-08-18 10:05:34,355 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing
2016-08-18 10:05:34,355 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing
2016-08-18 10:05:34,362 INFO [10.22.9.171:59396.activeMasterManager] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager was loaded successfully.
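[Editorial note: the backup-specific classes loaded above (BackupLogCleaner, LogRollMasterProcedureManager, BackupController, LogRollRegionServerProcedureManager) are registered by BackupManager decorating the configuration before startup. A hedged sketch of what that decoration amounts to, using standard HBase plugin keys; wiring them by hand like this is illustrative, not necessarily how the backup branch does it:]

    import org.apache.hadoop.conf.Configuration;

    public class BackupPluginWiring {
      // Sketch only: the backup code registers these itself when the feature is on.
      public static void registerBackupPlugins(Configuration conf) {
        // Log-cleaner chore plugin ("Added log cleaner: ...BackupLogCleaner").
        conf.set("hbase.master.logcleaner.plugins",
            conf.get("hbase.master.logcleaner.plugins", "") + ","
                + "org.apache.hadoop.hbase.backup.master.BackupLogCleaner");
        // Master-side procedure manager and master observer coprocessor.
        conf.set("hbase.procedure.master.classes",
            "org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager");
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.backup.master.BackupController");
        // Regionserver-side procedure manager ("Added region procedure manager").
        conf.set("hbase.procedure.regionserver.classes",
            "org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager");
      }
    }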
2016-08-18 10:05:34,382 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc already exists
2016-08-18 10:05:34,383 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/acquired already exists
2016-08-18 10:05:34,384 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/reached already exists
2016-08-18 10:05:34,385 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/abort already exists
2016-08-18 10:05:34,397 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized
2016-08-18 10:05:34,397 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing
2016-08-18 10:05:34,397 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized
2016-08-18 10:05:34,397 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing
2016-08-18 10:05:34,397 INFO [10.22.9.171:59396.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2016-08-18 10:05:34,398 DEBUG [M:0;10.22.9.171:59396] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot/acquired already exists
2016-08-18 10:05:34,398 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/online-snapshot/acquired already exists
2016-08-18 10:05:34,399 DEBUG [10.22.9.171:59396.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.9.171,59396,1471539932179
2016-08-18 10:05:34,400 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized
2016-08-18 10:05:34,400 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing
2016-08-18 10:05:34,400 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized
2016-08-18 10:05:34,400 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing
2016-08-18 10:05:34,401 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc already exists
2016-08-18 10:05:34,402 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/acquired already exists
2016-08-18 10:05:34,403 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/reached already exists
2016-08-18 10:05:34,404 DEBUG [RS:0;10.22.9.171:59399] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/abort already exists
2016-08-18 10:05:34,404 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized
2016-08-18 10:05:34,404 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized
2016-08-18 10:05:34,409 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/rolllog-proc/acquired already exists
2016-08-18 10:05:34,410 INFO [10.22.9.171:59396.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort
2016-08-18 10:05:34,411 DEBUG [10.22.9.171:59396.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.9.171,59396,1471539932179
2016-08-18 10:05:34,412 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/flush-table-proc/acquired already exists
2016-08-18 10:05:34,412 INFO [10.22.9.171:59396.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/flush-table-proc/acquired /1/flush-table-proc/reached /1/flush-table-proc/abort
2016-08-18 10:05:34,413 DEBUG [10.22.9.171:59396.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.9.171,59396,1471539932179
2016-08-18 10:05:34,429 INFO [RS:0;10.22.9.171:59399] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2016-08-18 10:05:34,429 INFO [M:0;10.22.9.171:59396] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2016-08-18 10:05:34,489 INFO [RS:0;10.22.9.171:59399] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-08-18 10:05:34,489 INFO [M:0;10.22.9.171:59396] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-08-18 10:05:34,489 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec
2016-08-18 10:05:34,489 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec
2016-08-18 10:05:34,512 DEBUG [RS:0;10.22.9.171:59399] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@581cfec7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.9.171/10.22.9.171:0
2016-08-18 10:05:34,512 DEBUG [M:0;10.22.9.171:59396] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@581cfec7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.9.171/10.22.9.171:0
2016-08-18 10:05:34,512 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:05:34,512 DEBUG [RS:0;10.22.9.171:59399] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:05:34,512 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:05:34,512 DEBUG [RS:0;10.22.9.171:59399] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:05:34,523 DEBUG [M:0;10.22.9.171:59396] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:M:0;10.22.9.171:59396
2016-08-18 10:05:34,523 DEBUG [RS:0;10.22.9.171:59399] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;10.22.9.171:59399
2016-08-18 10:05:34,537 INFO [10.22.9.171:59396.activeMasterManager] master.MasterCoprocessorHost(91): System coprocessor loading is enabled
2016-08-18 10:05:34,555 INFO [10.22.9.171:59396.activeMasterManager] coprocessor.CoprocessorHost(161): System coprocessor org.apache.hadoop.hbase.backup.master.BackupController was loaded successfully with priority (536870911).
2016-08-18 10:05:34,566 DEBUG [10.22.9.171:59396.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-10.22.9.171:59396, corePoolSize=5, maxPoolSize=5
2016-08-18 10:05:34,566 DEBUG [10.22.9.171:59396.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-10.22.9.171:59396, corePoolSize=5, maxPoolSize=5
2016-08-18 10:05:34,566 DEBUG [10.22.9.171:59396.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-10.22.9.171:59396, corePoolSize=5, maxPoolSize=5
2016-08-18 10:05:34,567 DEBUG [10.22.9.171:59396.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-10.22.9.171:59396, corePoolSize=5, maxPoolSize=5
2016-08-18 10:05:34,567 DEBUG [10.22.9.171:59396.activeMasterManager] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-10.22.9.171:59396, corePoolSize=10, maxPoolSize=10
2016-08-18 10:05:34,567 DEBUG [10.22.9.171:59396.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-10.22.9.171:59396, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:34,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-18 10:05:34,575 DEBUG [M:0;10.22.9.171:59396] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:05:34,576 DEBUG [RS:0;10.22.9.171:59399] zookeeper.ZKUtil(365): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:05:34,576 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:05:34,577 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:05:34,578 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:05:34,578 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:05:34,599 INFO [RS:0;10.22.9.171:59399] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2016-08-18 10:05:34,599 INFO [M:0;10.22.9.171:59396] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2016-08-18 10:05:34,600 INFO [M:0;10.22.9.171:59396] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2016-08-18 10:05:34,600 INFO [RS:0;10.22.9.171:59399] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2016-08-18 10:05:34,600 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59396,1471539932179 with port=59396, startcode=1471539932179
2016-08-18 10:05:34,607 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59396,1471539932179 with port=59399, startcode=1471539932874
2016-08-18 10:05:34,620 DEBUG [M:0;10.22.9.171:59396] regionserver.HRegionServer(2358): Master is not running yet
2016-08-18 10:05:34,620 WARN [M:0;10.22.9.171:59396] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying.
2016-08-18 10:05:34,718 INFO [10.22.9.171:59396.activeMasterManager] procedure2.ProcedureExecutor(487): Starting procedure executor threads=9
2016-08-18 10:05:34,719 INFO [10.22.9.171:59396.activeMasterManager] wal.WALProcedureStore(296): Starting WAL Procedure Store lease recovery
2016-08-18 10:05:34,721 WARN [10.22.9.171:59396.activeMasterManager] wal.WALProcedureStore(941): Log directory not found: File hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/MasterProcWALs does not exist.
2016-08-18 10:05:34,760 DEBUG [10.22.9.171:59396.activeMasterManager] wal.WALProcedureStore(833): Roll new state log: 1
2016-08-18 10:05:34,763 INFO [10.22.9.171:59396.activeMasterManager] wal.WALProcedureStore(319): Lease acquired for flushLogId: 1
2016-08-18 10:05:34,764 DEBUG [10.22.9.171:59396.activeMasterManager] wal.WALProcedureStore(336): No state logs to replay.
2016-08-18 10:05:34,764 DEBUG [10.22.9.171:59396.activeMasterManager] procedure2.ProcedureExecutor$1(298): load procedures maxProcId=0
2016-08-18 10:05:34,787 DEBUG [10.22.9.171:59396.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-18 10:05:34,789 DEBUG [10.22.9.171:59396.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2016-08-18 10:05:34,789 INFO [10.22.9.171:59396.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:34,792 DEBUG [10.22.9.171:59396.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:34,794 DEBUG [10.22.9.171:59396.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(674): replicationLogCleaner-0x1569e9d55410004 connected
2016-08-18 10:05:34,852 DEBUG [10.22.9.171:59396.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2016-08-18 10:05:34,854 DEBUG [10.22.9.171:59396.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2016-08-18 10:05:34,872 DEBUG [10.22.9.171:59396.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2016-08-18 10:05:34,874 DEBUG [10.22.9.171:59396.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2016-08-18 10:05:34,874 INFO [10.22.9.171:59396.activeMasterManager] master.ServerManager(1008): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
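[Editorial note: the "Waiting for region servers count to settle" entry just above is governed by four master settings; the values in effect here (min 1, max 1, timeout 4500 ms, interval 1500 ms) are the lowered test values. A sketch of the knobs, with key names as in ServerManager:]

    import org.apache.hadoop.conf.Configuration;

    public class RegionServerWaitTuning {
      // Sketch: the settings behind "expecting minimum of 1, maximum of 1,
      // timeout of 4500 ms, interval of 1500 ms" in the entry above.
      public static void tuneRegionServerWait(Configuration conf) {
        conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
        conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 1);
        conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);
        conf.setInt("hbase.master.wait.on.regionservers.interval", 1500);
      }
    }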
2016-08-18 10:05:34,874 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59396,1471539932179 with port=59396, startcode=1471539932179
2016-08-18 10:05:34,927 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59412; # active connections: 1
2016-08-18 10:05:35,017 INFO [M:0;10.22.9.171:59396] master.ServerManager(456): Registering server=10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,031 INFO [10.22.9.171:59396.activeMasterManager] master.ServerManager(1025): Finished waiting for region servers count to settle; checked in 1, slept for 157 ms, expecting minimum of 1, maximum of 1, master is running
2016-08-18 10:05:35,032 INFO [10.22.9.171:59396.activeMasterManager] master.ServerManager(456): Registering server=10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,032 INFO [10.22.9.171:59396.activeMasterManager] master.HMaster(710): Registered server found up in zk but who has not yet reported in: 10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,037 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179
2016-08-18 10:05:35,037 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:59388
2016-08-18 10:05:35,037 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1
2016-08-18 10:05:35,037 WARN [M:0;10.22.9.171:59396] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-18 10:05:35,037 INFO [M:0;10.22.9.171:59396] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:35,038 DEBUG [M:0;10.22.9.171:59396] regionserver.HRegionServer(1654): logdir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,048 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2016-08-18 10:05:35,052 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service RegionServerStatusService, sasl=false
2016-08-18 10:05:35,127 DEBUG [M:0;10.22.9.171:59396] regionserver.Replication(151): ReplicationStatisticsThread 300
2016-08-18 10:05:35,151 INFO [M:0;10.22.9.171:59396] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-18 10:05:35,155 INFO [M:0;10.22.9.171:59396] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-18 10:05:35,160 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu.hfs.0 (auth:SIMPLE)
2016-08-18 10:05:35,163 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59412 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:05:35,199 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.ServerManager(456): Registering server=10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,204 INFO [M:0;10.22.9.171:59396] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds
2016-08-18 10:05:35,231 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.9.171:59396, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:35,232 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.9.171:59396, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:35,232 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.9.171:59396, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:35,232 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.9.171:59396, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:35,232 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.9.171:59396, corePoolSize=2, maxPoolSize=2
2016-08-18 10:05:35,232 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396, corePoolSize=10, maxPoolSize=10
2016-08-18 10:05:35,232 DEBUG [M:0;10.22.9.171:59396] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.9.171:59396, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:35,235 DEBUG [M:0;10.22.9.171:59396] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,236 DEBUG [M:0;10.22.9.171:59396] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,236 INFO [M:0;10.22.9.171:59396] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.9.171,59396,1471539932179] other RSs: [10.22.9.171,59399,1471539932874, 10.22.9.171,59396,1471539932179]
2016-08-18 10:05:35,238 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179
2016-08-18 10:05:35,239 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:59388
2016-08-18 10:05:35,239 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1
2016-08-18 10:05:35,239 WARN [RS:0;10.22.9.171:59399] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-18 10:05:35,239 INFO [RS:0;10.22.9.171:59399] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:35,239 DEBUG [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1654): logdir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,245 DEBUG [RS:0;10.22.9.171:59399] regionserver.Replication(151): ReplicationStatisticsThread 300
2016-08-18 10:05:35,245 INFO [RS:0;10.22.9.171:59399] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-18 10:05:35,246 INFO [RS:0;10.22.9.171:59399] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-18 10:05:35,246 INFO [RS:0;10.22.9.171:59399] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds
2016-08-18 10:05:35,248 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.9.171:59399, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:35,249 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.9.171:59399, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:35,249 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.9.171:59399, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:35,249 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.9.171:59399, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:35,249 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.9.171:59399, corePoolSize=2, maxPoolSize=2
2016-08-18 10:05:35,249 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399, corePoolSize=10, maxPoolSize=10
2016-08-18 10:05:35,250 DEBUG [RS:0;10.22.9.171:59399] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.9.171:59399, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:35,252 DEBUG [RS:0;10.22.9.171:59399] zookeeper.ZKUtil(365): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,252 DEBUG [RS:0;10.22.9.171:59399] zookeeper.ZKUtil(365): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,253 INFO [RS:0;10.22.9.171:59399] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.9.171,59399,1471539932874, 10.22.9.171,59396,1471539932179] other RSs: [10.22.9.171,59399,1471539932874, 10.22.9.171,59396,1471539932179]
2016-08-18 10:05:35,317 INFO [SplitLogWorker-10.22.9.171:59396] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.9.171,59396,1471539932179 starting
2016-08-18 10:05:35,317 INFO [SplitLogWorker-10.22.9.171:59399] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.9.171,59399,1471539932874 starting
2016-08-18 10:05:35,341 INFO [M:0;10.22.9.171:59396] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore.
2016-08-18 10:05:35,341 INFO [RS:0;10.22.9.171:59399] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore.
2016-08-18 10:05:35,354 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1412): Serving as 10.22.9.171,59399,1471539932874, RpcServer on 10.22.9.171/10.22.9.171:59399, sessionid=0x1569e9d55410001
2016-08-18 10:05:35,354 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1412): Serving as 10.22.9.171,59396,1471539932179, RpcServer on 10.22.9.171/10.22.9.171:59396, sessionid=0x1569e9d55410000
2016-08-18 10:05:35,354 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting
2016-08-18 10:05:35,354 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting
2016-08-18 10:05:35,354 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59399,1471539932874'
2016-08-18 10:05:35,355 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-18 10:05:35,354 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59396,1471539932179'
2016-08-18 10:05:35,355 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-18 10:05:35,355 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:05:35,356 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:05:35,356 INFO [RS:0;10.22.9.171:59399] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager.
2016-08-18 10:05:35,356 INFO [M:0;10.22.9.171:59396] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager.
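[Editorial note: both servers above instantiate RegionGroupingProvider with BoundedGroupingStrategy rather than the default WAL provider. A hedged sketch of the configuration that selects this, assuming 2.0-era key names from WALFactory and RegionGroupingProvider:]

    import org.apache.hadoop.conf.Configuration;

    public class WalProviderSelection {
      // Sketch: "multiwal" maps to RegionGroupingProvider; the "bounded"
      // strategy maps to BoundedGroupingStrategy, as instantiated above.
      public static void selectBoundedMultiWal(Configuration conf) {
        conf.set("hbase.wal.provider", "multiwal");
        conf.set("hbase.wal.regiongrouping.strategy", "bounded");
      }
    }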
2016-08-18 10:05:35,357 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started
2016-08-18 10:05:35,357 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting
2016-08-18 10:05:35,356 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started
2016-08-18 10:05:35,357 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting
2016-08-18 10:05:35,357 DEBUG [M:0;10.22.9.171:59396] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,357 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59396,1471539932179'
2016-08-18 10:05:35,357 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:05:35,357 DEBUG [RS:0;10.22.9.171:59399] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,357 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59399,1471539932874'
2016-08-18 10:05:35,357 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:05:35,358 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:05:35,358 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:05:35,358 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started
2016-08-18 10:05:35,358 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting
2016-08-18 10:05:35,358 DEBUG [M:0;10.22.9.171:59396] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,358 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59396,1471539932179'
2016-08-18 10:05:35,359 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started
2016-08-18 10:05:35,359 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting
2016-08-18 10:05:35,359 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/flush-table-proc/abort'
2016-08-18 10:05:35,359 DEBUG [RS:0;10.22.9.171:59399] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.9.171,59399,1471539932874
2016-08-18 10:05:35,359 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59399,1471539932874'
2016-08-18 10:05:35,359 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/flush-table-proc/abort'
2016-08-18 10:05:35,359 DEBUG [M:0;10.22.9.171:59396] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/flush-table-proc/acquired'
2016-08-18 10:05:35,360 DEBUG [RS:0;10.22.9.171:59399] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/flush-table-proc/acquired'
2016-08-18 10:05:35,360 DEBUG [M:0;10.22.9.171:59396] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started
2016-08-18 10:05:35,360 DEBUG [RS:0;10.22.9.171:59399] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started
2016-08-18 10:05:35,378 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2016-08-18 10:05:35,378 INFO [10.22.9.171:59396.activeMasterManager] master.HMaster(938): Re-assigning hbase:meta with replicaId, 0 it was on null
2016-08-18 10:05:35,405 DEBUG [10.22.9.171:59396.activeMasterManager] master.AssignmentManager(1291): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=10.22.9.171,59396,1471539932179; 2 (online=2) available servers, forceNewPlan=false
2016-08-18 10:05:35,405 INFO [10.22.9.171:59396.activeMasterManager] master.AssignmentManager(1080): Assigning hbase:meta,,1.1588230740 to 10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,405 INFO [10.22.9.171:59396.activeMasterManager] master.RegionStates(1106): Transition {1588230740 state=OFFLINE, ts=1471539935379, server=null} to {1588230740 state=PENDING_OPEN, ts=1471539935405, server=10.22.9.171,59396,1471539932179}
2016-08-18 10:05:35,405 INFO [10.22.9.171:59396.activeMasterManager] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,411 INFO [M:0;10.22.9.171:59396] quotas.RegionServerQuotaManager(62): Quota support disabled
2016-08-18 10:05:35,411 INFO [RS:0;10.22.9.171:59399] quotas.RegionServerQuotaManager(62): Quota support disabled
2016-08-18 10:05:35,430 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.MetaTableLocator(451): META region location doesn't exist, create it
2016-08-18 10:05:35,432 DEBUG [10.22.9.171:59396.activeMasterManager] master.ServerManager(934): New admin connection to 10.22.9.171,59396,1471539932179
2016-08-18 10:05:35,582 INFO [10.22.9.171:59396.activeMasterManager] regionserver.RSRpcServices(1666): Open hbase:meta,,1.1588230740
2016-08-18 10:05:35,603 INFO [RS_OPEN_META-10.22.9.171:59396-0] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-18 10:05:35,603 INFO [RS_OPEN_META-10.22.9.171:59396-0] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-18 10:05:35,694 INFO [RS_OPEN_META-10.22.9.171:59396-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:05:35,716 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:35,724 INFO [RS_OPEN_META-10.22.9.171:59396-0] wal.FSHLog(1436): Slow sync cost: 7 ms, current pipeline: []
2016-08-18 10:05:35,725 INFO [RS_OPEN_META-10.22.9.171:59396-0] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:35,762 DEBUG [10.22.9.171:59396.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539935691,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"}
2016-08-18 10:05:35,774 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] regionserver.HRegion(6339): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:05:35,847 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] coprocessor.CoprocessorHost(181): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2016-08-18 10:05:35,874 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] regionserver.HRegion(7445): Registered coprocessor service: region=hbase:meta,,1 service=hbase.pb.MultiRowMutationService
2016-08-18 10:05:35,888 INFO [RS_OPEN_META-10.22.9.171:59396-0] regionserver.RegionCoprocessorHost(376): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2016-08-18 10:05:35,922 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table meta 1588230740
2016-08-18 10:05:35,922 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740
2016-08-18 10:05:35,945 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:35,946 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:35,947 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/info
2016-08-18 10:05:35,949 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:35,950 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:35,951 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/table
2016-08-18 10:05:35,956 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740
2016-08-18 10:05:35,959 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead
2016-08-18 10:05:35,967 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/recovered.edits/3.seqid to file, newSeqId=3, maxSeqId=2
2016-08-18 10:05:35,968 INFO [RS_OPEN_META-10.22.9.171:59396-0] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=3
2016-08-18 10:05:36,069 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:36,073 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:meta,,1.1588230740
2016-08-18 10:05:36,126 DEBUG [PostOpenDeployTasks:1588230740] master.AssignmentManager(2884): Got transition OPENED for {1588230740 state=PENDING_OPEN, ts=1471539935405, server=10.22.9.171,59396,1471539932179} from 10.22.9.171,59396,1471539932179
2016-08-18 10:05:36,126 INFO [PostOpenDeployTasks:1588230740] master.RegionStates(1106): Transition {1588230740 state=PENDING_OPEN, ts=1471539935405, server=10.22.9.171,59396,1471539932179} to {1588230740 state=OPEN, ts=1471539936126, server=10.22.9.171,59396,1471539932179}
2016-08-18 10:05:36,127 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.9.171,59396,1471539932179
2016-08-18 10:05:36,129 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/meta-region-server
2016-08-18 10:05:36,129 DEBUG [PostOpenDeployTasks:1588230740] master.RegionStates(452): Onlined 1588230740 on 10.22.9.171,59396,1471539932179
2016-08-18 10:05:36,139 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:meta,,1.1588230740
2016-08-18 10:05:36,140 DEBUG [RS_OPEN_META-10.22.9.171:59396-0] handler.OpenRegionHandler(126): Opened hbase:meta,,1.1588230740 on 10.22.9.171,59396,1471539932179
2016-08-18 10:05:36,418 INFO [M:0;10.22.9.171:59396] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59396%2C1471539932179.regiongroup-0, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:05:36,418 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59399%2C1471539932874.regiongroup-0, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:05:36,421 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418
2016-08-18 10:05:36,422 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:05:36,428 INFO [M:0;10.22.9.171:59396] wal.FSHLog(1436): Slow sync cost: 6 ms, current pipeline: []
2016-08-18 10:05:36,429 INFO [M:0;10.22.9.171:59396] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418
2016-08-18 10:05:36,430 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(1436): Slow sync cost: 8 ms, current pipeline: []
2016-08-18 10:05:36,430 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:05:36,660 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:36,719 INFO [10.22.9.171:59396.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META
2016-08-18 10:05:36,720 DEBUG [10.22.9.171:59396.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539936720,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"}
2016-08-18 10:05:36,722 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:36,723 INFO [10.22.9.171:59396.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META
2016-08-18 10:05:37,032 DEBUG [10.22.9.171:59396.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(server crash processing)
2016-08-18 10:05:37,032 INFO [10.22.9.171:59396.activeMasterManager] master.ServerManager(683): AssignmentManager hasn't finished failover cleanup; waiting
2016-08-18 10:05:37,034 INFO [10.22.9.171:59396.activeMasterManager] master.HMaster(965): hbase:meta with replicaId 0 assigned=1, location=10.22.9.171,59396,1471539932179
2016-08-18 10:05:37,053 INFO [10.22.9.171:59396.activeMasterManager] master.AssignmentManager(555): Clean cluster startup. Don't reassign user regions
2016-08-18 10:05:37,056 INFO [10.22.9.171:59396.activeMasterManager] master.AssignmentManager(425): Joined the cluster in 13ms, failover=false
2016-08-18 10:05:37,058 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/info
2016-08-18 10:05:37,059 DEBUG [10.22.9.171:59396.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/table
2016-08-18 10:05:37,180 INFO [10.22.9.171:59396.activeMasterManager] master.TableNamespaceManager(93): Namespace table not found. Creating...
2016-08-18 10:05:37,483 DEBUG [10.22.9.171:59396.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:05:37,565 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:namespace/write-master:593960000000000
2016-08-18 10:05:37,692 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:05:37,695 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2016-08-18 10:05:37,707 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6162): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == hbase:namespace
2016-08-18 10:05:37,719 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:05:37,720 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(736): Instantiated hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:37,721 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1419): Closing hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.: disabling compactions & flushes
2016-08-18 10:05:37,721 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:37,721 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1552): Closed hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:37,842 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":41}]},"row":"hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4."}
2016-08-18 10:05:37,844 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:37,845 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:05:37,957 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59396,1471539932179
2016-08-18 10:05:37,959 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table hbase:namespace state
org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:05:37,960 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {83a4988679dc2f377c4e4a129e3ecec4 state=OFFLINE, ts=1471539937957, server=null} to {83a4988679dc2f377c4e4a129e3ecec4 state=PENDING_OPEN, ts=1471539937960, server=10.22.9.171,59396,1471539932179}
2016-08-18 10:05:37,961 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. with state=PENDING_OPEN, sn=10.22.9.171,59396,1471539932179
2016-08-18 10:05:37,961 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:37,963 INFO [ProcedureExecutor-0] regionserver.RSRpcServices(1666): Open hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:37,969 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59396,1471539932179
2016-08-18 10:05:37,970 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539937970,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:namespace"}
2016-08-18 10:05:37,971 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:37,972 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table hbase:namespace state to ENABLED in META
2016-08-18 10:05:37,974 INFO [RS_OPEN_REGION-10.22.9.171:59396-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59396%2C1471539932179.regiongroup-1, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:05:37,977 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974
2016-08-18 10:05:37,982 INFO [RS_OPEN_REGION-10.22.9.171:59396-0] wal.FSHLog(1436): Slow sync cost: 4 ms, current pipeline: []
2016-08-18 10:05:37,982 INFO [RS_OPEN_REGION-10.22.9.171:59396-0] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974
2016-08-18 10:05:37,983 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] regionserver.HRegion(6339): Opening region: {ENCODED => 83a4988679dc2f377c4e4a129e3ecec4, NAME => 'hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:05:37,984 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table namespace 83a4988679dc2f377c4e4a129e3ecec4
2016-08-18 10:05:37,984 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] regionserver.HRegion(736): Instantiated hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:37,988 INFO [StoreOpener-83a4988679dc2f377c4e4a129e3ecec4-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:37,989 INFO [StoreOpener-83a4988679dc2f377c4e4a129e3ecec4-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:37,990 DEBUG [StoreOpener-83a4988679dc2f377c4e4a129e3ecec4-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/info
2016-08-18 10:05:37,991 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4
2016-08-18 10:05:37,999 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:05:37,999 INFO [RS_OPEN_REGION-10.22.9.171:59396-0] regionserver.HRegion(871): Onlined 83a4988679dc2f377c4e4a129e3ecec4; next sequenceid=2
2016-08-18 10:05:37,999 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974
2016-08-18 10:05:38,001 INFO [PostOpenDeployTasks:83a4988679dc2f377c4e4a129e3ecec4] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:38,001 DEBUG [PostOpenDeployTasks:83a4988679dc2f377c4e4a129e3ecec4] master.AssignmentManager(2884): Got transition OPENED for {83a4988679dc2f377c4e4a129e3ecec4 state=PENDING_OPEN, ts=1471539937960, server=10.22.9.171,59396,1471539932179} from 10.22.9.171,59396,1471539932179
2016-08-18 10:05:38,001 INFO [PostOpenDeployTasks:83a4988679dc2f377c4e4a129e3ecec4] master.RegionStates(1106): Transition {83a4988679dc2f377c4e4a129e3ecec4 state=PENDING_OPEN, ts=1471539937960, server=10.22.9.171,59396,1471539932179} to {83a4988679dc2f377c4e4a129e3ecec4 state=OPEN, ts=1471539938001, server=10.22.9.171,59396,1471539932179}
2016-08-18 10:05:38,001 INFO [PostOpenDeployTasks:83a4988679dc2f377c4e4a129e3ecec4] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. with state=OPEN, openSeqNum=2, server=10.22.9.171,59396,1471539932179
2016-08-18 10:05:38,002 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:38,003 DEBUG [PostOpenDeployTasks:83a4988679dc2f377c4e4a129e3ecec4] master.RegionStates(452): Onlined 83a4988679dc2f377c4e4a129e3ecec4 on 10.22.9.171,59396,1471539932179
2016-08-18 10:05:38,007 DEBUG [PostOpenDeployTasks:83a4988679dc2f377c4e4a129e3ecec4] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:05:38,007 DEBUG [RS_OPEN_REGION-10.22.9.171:59396-0] handler.OpenRegionHandler(126): Opened hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. on 10.22.9.171,59396,1471539932179
2016-08-18 10:05:38,031 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/namespace
2016-08-18 10:05:38,033 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/namespace
2016-08-18 10:05:38,196 DEBUG [10.22.9.171:59396.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
2016-08-18 10:05:38,296 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:namespace/write-master:593960000000000
2016-08-18 10:05:38,297 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 1.0030sec: CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=FINISHED
2016-08-18 10:05:38,529 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974
2016-08-18 10:05:38,642 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2016-08-18 10:05:38,646 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default
2016-08-18 10:05:38,857 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 680msec: CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=FINISHED
2016-08-18 10:05:38,977 DEBUG [10.22.9.171:59396.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
2016-08-18 10:05:39,195 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974
2016-08-18 10:05:39,305 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2016-08-18 10:05:39,308 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default
2016-08-18 10:05:39,308 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase
2016-08-18 10:05:39,520 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 543msec: CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=FINISHED
2016-08-18 10:05:39,533 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/namespace/default already exists
2016-08-18 10:05:39,534 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/default
2016-08-18 10:05:39,535 DEBUG [10.22.9.171:59396.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /1/namespace/hbase already exists
2016-08-18 10:05:39,536 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/hbase
2016-08-18 10:05:39,536 INFO [10.22.9.171:59396.activeMasterManager] master.HMaster(807): Master has completed initialization
2016-08-18 10:05:39,536 DEBUG [10.22.9.171:59396.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(master initialized)
2016-08-18 10:05:39,549 INFO [10.22.9.171:59396.activeMasterManager] quotas.MasterQuotaManager(72): Quota support disabled
2016-08-18 10:05:39,549 INFO [10.22.9.171:59396.activeMasterManager] zookeeper.ZooKeeperWatcher(225): not a secure deployment, proceeding
2016-08-18 10:05:39,561 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4e55e523 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:39,563 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4e55e5230x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:39,564 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@400eb33d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:05:39,564 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:05:39,564 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4e55e523-0x1569e9d55410005 connected
2016-08-18 10:05:39,564 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:05:39,579 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:05:39,579 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59423; # active connections: 2
2016-08-18 10:05:39,580 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:05:39,580 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59423 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:05:39,636 INFO [10.22.9.171:59396.activeMasterManager] master.HMaster(1495): Client=null/null create 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 10:05:39,661 INFO [main] hbase.HBaseTestingUtility(1089): Minicluster is up
2016-08-18 10:05:39,661 INFO [main] hbase.HBaseTestingUtility(1263): The hbase.fs.tmp.dir is set to /user/tyu/hbase-staging
2016-08-18 10:05:39,661 INFO [main] hbase.HBaseTestingUtility(1013): Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2016-08-18 10:05:39,679 INFO [main] hbase.HBaseTestingUtility(428): System.getProperty("hadoop.log.dir") already set to: /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_logs so I do NOT create it in target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346
2016-08-18 10:05:39,679 WARN [main] hbase.HBaseTestingUtility(432): hadoop.log.dir property value differs in configuration and system: Configuration=/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop-log-dir while System=/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_logs Erasing configuration value by system value.
2016-08-18 10:05:39,679 INFO [main] hbase.HBaseTestingUtility(428): System.getProperty("hadoop.tmp.dir") already set to: /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_tmp so I do NOT create it in target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346
2016-08-18 10:05:39,679 WARN [main] hbase.HBaseTestingUtility(432): hadoop.tmp.dir property value differs in configuration and system: Configuration=/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop-tmp-dir while System=/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_tmp Erasing configuration value by system value.
2016-08-18 10:05:39,679 INFO [main] hbase.HBaseTestingUtility(496): Created new mini-cluster data directory: /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/dfscluster_8e70a1d2-0197-4e0b-ad8b-c57c3755930d, deleteOnExit=true
2016-08-18 10:05:39,680 INFO [main] hbase.HBaseTestingUtility(743): Setting test.cache.data to /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/cache_data in system properties and HBase conf
2016-08-18 10:05:39,680 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.tmp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop_tmp in system properties and HBase conf
2016-08-18 10:05:39,680 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.log.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop_logs in system properties and HBase conf
2016-08-18 10:05:39,680 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.local.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/mapred_local in system properties and HBase conf
2016-08-18 10:05:39,680 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.temp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/mapred_temp in system properties and HBase conf
2016-08-18 10:05:39,680 INFO [main] hbase.HBaseTestingUtility(734): read short circuit is OFF
2016-08-18 10:05:39,681 DEBUG [main] fs.HFileSystem(221): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2016-08-18 10:05:39,717 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:39,719 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.3/hadoop-hdfs-2.7.3-tests.jar!/webapps/hdfs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_59424_hdfs____r2ejgc/webapp
2016-08-18 10:05:39,738 DEBUG [10.22.9.171:59396.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:05:39,743 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:593960000000000
2016-08-18 10:05:39,744 INFO [10.22.9.171:59396.activeMasterManager] master.BackupController(51): Created hbase:backup table
2016-08-18 10:05:39,798 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:59424
2016-08-18 10:05:39,864 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:05:39,872 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/hbase/backup/.tabledesc/.tableinfo.0000000001
2016-08-18 10:05:39,874 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(6162): creating HRegion hbase:backup HTD == 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == hbase:backup
2016-08-18 10:05:39,885 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:05:39,886 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(736): Instantiated hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:39,886 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1419): Closing hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.: disabling compactions & flushes
2016-08-18 10:05:39,886 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:39,886 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1552): Closed hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:39,907 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:39,911 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.3/hadoop-hdfs-2.7.3-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_59429_datanode____.14svnm/webapp
2016-08-18 10:05:39,987 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:59429
2016-08-18 10:05:39,994 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":38}]},"row":"hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4."}
2016-08-18 10:05:39,996 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:39,998 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:05:40,076 INFO [Block report processor] blockmanagement.BlockManager(1883): BLOCK* processReport: from storage DS-564fd608-c77e-48a6-a605-76fa80892254 node DatanodeRegistration(127.0.0.1:59428, datanodeUuid=973a9a42-4f2d-41df-b94f-b6002d2955b4, infoPort=59430, infoSecurePort=0, ipcPort=59431, storageInfo=lv=-56;cid=testClusterID;nsid=1003389433;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-08-18 10:05:40,077 INFO [Block report processor] blockmanagement.BlockManager(1883): BLOCK* processReport: from storage DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525 node DatanodeRegistration(127.0.0.1:59428, datanodeUuid=973a9a42-4f2d-41df-b94f-b6002d2955b4, infoPort=59430, infoSecurePort=0, ipcPort=59431, storageInfo=lv=-56;cid=testClusterID;nsid=1003389433;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-08-18 10:05:40,104 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,105 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table hbase:backup state
org.apache.hadoop.hbase.TableNotFoundException: hbase:backup
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:05:40,105 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {97fff8dc57d09226ac34540d2bf674e4 state=OFFLINE, ts=1471539940104, server=null} to {97fff8dc57d09226ac34540d2bf674e4 state=PENDING_OPEN, ts=1471539940105, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:05:40,105 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,106 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:40,107 DEBUG [ProcedureExecutor-3] master.ServerManager(934): New admin connection to 10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,112 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:40,115 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:40,116 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service AdminService, sasl=false
2016-08-18 10:05:40,116 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59434; # active connections: 1
2016-08-18 10:05:40,117 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:05:40,117 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59434 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:05:40,118 INFO [PriorityRpcServer.handler=0,queue=0,port=59399] regionserver.RSRpcServices(1666): Open hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:40,126 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,126 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539940126,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:backup"}
2016-08-18 10:05:40,128 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:40,129 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table hbase:backup state to ENABLED in META
2016-08-18 10:05:40,130 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59399%2C1471539932874.regiongroup-1, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:05:40,134 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 0
2016-08-18 10:05:40,134 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130
2016-08-18 10:05:40,137 INFO [main] util.FSUtils(749): Created version file at hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437 with version=8
2016-08-18 10:05:40,138 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
2016-08-18 10:05:40,139 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] wal.FSHLog(1436): Slow sync cost: 5 ms, current pipeline: []
2016-08-18 10:05:40,140 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130
2016-08-18 10:05:40,140 INFO [main] client.ConnectionUtils(106): master//10.22.9.171:0 server-side HConnection retries=350
2016-08-18 10:05:40,141 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1
2016-08-18 10:05:40,141 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(6339): Opening region: {ENCODED => 97fff8dc57d09226ac34540d2bf674e4, NAME => 'hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:05:40,142 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table backup 97fff8dc57d09226ac34540d2bf674e4
2016-08-18 10:05:40,142 INFO [main] ipc.RpcServer$Listener(635): master//10.22.9.171:0: started 3 reader(s) listening on port=59437
2016-08-18 10:05:40,143 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(736): Instantiated hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:40,145 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:40,145 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:40,147 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-08-18 10:05:40,148 INFO [StoreOpener-97fff8dc57d09226ac34540d2bf674e4-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:40,149 INFO [StoreOpener-97fff8dc57d09226ac34540d2bf674e4-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:40,150 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=master:59437 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:40,151 DEBUG [StoreOpener-97fff8dc57d09226ac34540d2bf674e4-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/meta
2016-08-18 10:05:40,152 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:594370x0, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:40,154 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): master:59437-0x1569e9d55410006 connected
2016-08-18 10:05:40,155 INFO [StoreOpener-97fff8dc57d09226ac34540d2bf674e4-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:40,156 INFO [StoreOpener-97fff8dc57d09226ac34540d2bf674e4-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:40,157 DEBUG [StoreOpener-97fff8dc57d09226ac34540d2bf674e4-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/session
2016-08-18 10:05:40,158 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4
2016-08-18 10:05:40,162 DEBUG [main] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-08-18 10:05:40,163 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:backup, use config (67108864) instead
2016-08-18 10:05:40,163 DEBUG [main] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-18 10:05:40,163 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting
2016-08-18 10:05:40,163 INFO [RpcServer.listener,port=59437] ipc.RpcServer$Listener(769): RpcServer.listener,port=59437: starting
2016-08-18 10:05:40,163 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0
2016-08-18 10:05:40,164 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0
2016-08-18 10:05:40,164 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0
2016-08-18 10:05:40,164 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0
2016-08-18 10:05:40,165 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0
2016-08-18 10:05:40,165 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0
2016-08-18 10:05:40,165 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1
2016-08-18 10:05:40,165 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0
2016-08-18 10:05:40,165 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1
2016-08-18 10:05:40,166 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0
2016-08-18 10:05:40,166 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0
2016-08-18 10:05:40,166 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0
2016-08-18 10:05:40,166 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0
2016-08-18 10:05:40,167 INFO [main] master.HMaster(397): hbase.rootdir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437, hbase.cluster.distributed=false
2016-08-18 10:05:40,167 DEBUG [main] impl.BackupManager(134): Added log cleaner: org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2016-08-18 10:05:40,168 DEBUG [main] impl.BackupManager(135): Added master procedure manager: org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
2016-08-18 10:05:40,168 DEBUG [main] impl.BackupManager(136): Added master observer: org.apache.hadoop.hbase.backup.master.BackupController
2016-08-18 10:05:40,168 INFO [main] master.HMaster(1719): Adding backup master ZNode /2/backup-masters/10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,168 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:05:40,168 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(871): Onlined 97fff8dc57d09226ac34540d2bf674e4; next sequenceid=2
2016-08-18 10:05:40,169 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130
2016-08-18 10:05:40,170 DEBUG [main] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/backup-masters/10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,170 INFO [PostOpenDeployTasks:97fff8dc57d09226ac34540d2bf674e4] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:40,171 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/master
2016-08-18 10:05:40,172 DEBUG [PriorityRpcServer.handler=3,queue=1,port=59396] master.AssignmentManager(2884): Got transition OPENED for {97fff8dc57d09226ac34540d2bf674e4 state=PENDING_OPEN, ts=1471539940105, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,172 INFO [PriorityRpcServer.handler=3,queue=1,port=59396] master.RegionStates(1106): Transition {97fff8dc57d09226ac34540d2bf674e4 state=PENDING_OPEN, ts=1471539940105, server=10.22.9.171,59399,1471539932874} to {97fff8dc57d09226ac34540d2bf674e4 state=OPEN, ts=1471539940172, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:05:40,172 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/master
2016-08-18 10:05:40,172 INFO [PriorityRpcServer.handler=3,queue=1,port=59396] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,172 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/master
2016-08-18 10:05:40,173 INFO [10.22.9.171:59437.activeMasterManager] master.ActiveMasterManager(170): Deleting ZNode for /2/backup-masters/10.22.9.171,59437,1471539940144 from backup master directory
2016-08-18 10:05:40,172 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:05:40,173 DEBUG [main-EventThread] master.ActiveMasterManager(126): A master is now available
2016-08-18 10:05:40,173 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/backup-masters/10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,173 WARN [10.22.9.171:59437.activeMasterManager] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-18 10:05:40,174 INFO [10.22.9.171:59437.activeMasterManager] master.ActiveMasterManager(179): Registered Active Master=10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,174 DEBUG [PriorityRpcServer.handler=3,queue=1,port=59396] master.RegionStates(452): Onlined 97fff8dc57d09226ac34540d2bf674e4 on 10.22.9.171,59399,1471539932874
2016-08-18 10:05:40,176 DEBUG [PostOpenDeployTasks:97fff8dc57d09226ac34540d2bf674e4] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:05:40,177 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] handler.OpenRegionHandler(126): Opened hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
on 10.22.9.171,59399,1471539932874 2016-08-18 10:05:40,199 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 0 2016-08-18 10:05:40,202 DEBUG [main] impl.BackupManager(158): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager 2016-08-18 10:05:40,202 DEBUG [10.22.9.171:59437.activeMasterManager] util.FSUtils(901): Created cluster ID file at hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/hbase.id with ID: cb9fa8a5-abb1-4f07-a744-fca71e86e7e9 2016-08-18 10:05:40,204 INFO [main] client.ConnectionUtils(106): regionserver//10.22.9.171:0 server-side HConnection retries=350 2016-08-18 10:05:40,204 INFO [main] ipc.SimpleRpcScheduler(190): Using deadline as user call queue, count=1 2016-08-18 10:05:40,206 INFO [main] ipc.RpcServer$Listener(635): regionserver//10.22.9.171:0: started 3 reader(s) listening on port=59441 2016-08-18 10:05:40,207 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:05:40,208 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:05:40,208 INFO [10.22.9.171:59437.activeMasterManager] master.MasterFileSystem(528): BOOTSTRAP: creating hbase:meta region 2016-08-18 10:05:40,209 INFO [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(6162): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'table', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437 Table name == hbase:meta 2016-08-18 10:05:40,210 INFO [main] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-08-18 10:05:40,211 INFO [main] zookeeper.RecoverableZooKeeper(120): Process 
identifier=regionserver:59441 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:05:40,214 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:594410x0, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:05:40,214 DEBUG [main] zookeeper.ZKUtil(365): regionserver:594410x0, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/master 2016-08-18 10:05:40,215 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): regionserver:59441-0x1569e9d55410007 connected 2016-08-18 10:05:40,216 DEBUG [main] zookeeper.ZKUtil(367): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running 2016-08-18 10:05:40,216 INFO [RpcServer.responder] ipc.RpcServer$Responder(958): RpcServer.responder: starting 2016-08-18 10:05:40,216 INFO [RpcServer.listener,port=59441] ipc.RpcServer$Listener(769): RpcServer.listener,port=59441: starting 2016-08-18 10:05:40,216 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=0 queue=0 2016-08-18 10:05:40,216 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=1 queue=0 2016-08-18 10:05:40,217 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=2 queue=0 2016-08-18 10:05:40,217 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=3 queue=0 2016-08-18 10:05:40,217 DEBUG [main] ipc.RpcExecutor(118): B.default Start Handler index=4 queue=0 2016-08-18 10:05:40,217 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=0 queue=0 2016-08-18 10:05:40,218 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=1 queue=1 2016-08-18 10:05:40,218 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=2 queue=0 2016-08-18 10:05:40,219 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=3 queue=1 2016-08-18 10:05:40,220 DEBUG [main] ipc.RpcExecutor(118): Priority Start Handler index=4 queue=0 2016-08-18 10:05:40,220 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=0 queue=0 2016-08-18 10:05:40,220 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=1 queue=0 2016-08-18 10:05:40,220 DEBUG [main] ipc.RpcExecutor(118): Replication Start Handler index=2 queue=0 2016-08-18 10:05:40,223 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 0 2016-08-18 10:05:40,223 INFO [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3d4f6712 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:05:40,223 INFO [RS:0;10.22.9.171:59441] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x27f7ee30 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:05:40,224 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740 2016-08-18 10:05:40,225 DEBUG [M:0;10.22.9.171:59437-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3d4f67120x0, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:05:40,226 INFO [M:0;10.22.9.171:59437] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null 2016-08-18 10:05:40,226 DEBUG 
[RS:0;10.22.9.171:59441-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x27f7ee300x0, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:05:40,226 DEBUG [M:0;10.22.9.171:59437] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster 2016-08-18 10:05:40,226 INFO [RS:0;10.22.9.171:59441] client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null 2016-08-18 10:05:40,226 DEBUG [RS:0;10.22.9.171:59441] client.ConnectionImplementation(466): clusterid came back null, using default default-cluster 2016-08-18 10:05:40,226 DEBUG [M:0;10.22.9.171:59437] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f9d9393, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:05:40,227 DEBUG [M:0;10.22.9.171:59437] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:05:40,227 DEBUG [M:0;10.22.9.171:59437] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:05:40,227 DEBUG [RS:0;10.22.9.171:59441] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@243b886, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:05:40,227 DEBUG [M:0;10.22.9.171:59437-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3d4f6712-0x1569e9d55410008 connected 2016-08-18 10:05:40,227 DEBUG [RS:0;10.22.9.171:59441-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x27f7ee30-0x1569e9d55410009 connected 2016-08-18 10:05:40,227 DEBUG [RS:0;10.22.9.171:59441] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:05:40,228 DEBUG [RS:0;10.22.9.171:59441] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:05:40,230 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:05:40,230 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:05:40,232 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info 2016-08-18 10:05:40,234 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:05:40,235 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:05:40,236 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table 2016-08-18 10:05:40,238 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740 2016-08-18 10:05:40,241 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead 2016-08-18 10:05:40,246 DEBUG [10.22.9.171:59437.activeMasterManager] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:05:40,246 INFO [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=2 2016-08-18 10:05:40,246 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes 2016-08-18 10:05:40,246 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740 2016-08-18 10:05:40,247 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info 2016-08-18 10:05:40,247 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table 2016-08-18 10:05:40,247 INFO [10.22.9.171:59437.activeMasterManager] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740 2016-08-18 10:05:40,258 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 0 2016-08-18 10:05:40,261 DEBUG [10.22.9.171:59437.activeMasterManager] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2016-08-18 10:05:40,267 INFO [10.22.9.171:59437.activeMasterManager] fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-08-18 10:05:40,269 INFO [10.22.9.171:59437.activeMasterManager] coordination.ZKSplitLogManagerCoordination(599): Found 0 orphan tasks and 0 rescan nodes 2016-08-18 10:05:40,269 DEBUG [10.22.9.171:59437.activeMasterManager] util.FSTableDescriptors(222): Fetching table descriptors from the filesystem. 
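The recurring CompactionConfiguration dumps above are a direct print of each store's compaction tuning: files [3, 10) is the minimum and maximum file count per compaction, ratio 1.2 the selection ratio (5.0 off-peak), 2684354560 the throttle point between the small and large compaction pools, and 604800000 ms the major-compaction period with 0.5 jitter. A minimal sketch of the standard configuration keys behind those numbers, assuming the stock defaults this test run appears to be using:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);         // files [3, ...)
        conf.setInt("hbase.hstore.compaction.max", 10);        // files [..., 10)
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);  // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f); // off-peak ratio
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);    // 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
      }
    }

The throttle point is consistent with its default of 2 x max files x the 128 MB memstore flush size (2 x 10 x 134217728 = 2684354560).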
2016-08-18 10:05:40,275 INFO [10.22.9.171:59437.activeMasterManager] balancer.StochasticLoadBalancer(156): loading config 2016-08-18 10:05:40,276 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/balancer 2016-08-18 10:05:40,276 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/normalizer 2016-08-18 10:05:40,278 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/switch/split 2016-08-18 10:05:40,278 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/switch/merge 2016-08-18 10:05:40,279 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running 2016-08-18 10:05:40,279 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running 2016-08-18 10:05:40,280 INFO [10.22.9.171:59437.activeMasterManager] master.HMaster(620): Server active/primary master=10.22.9.171,59437,1471539940144, sessionid=0x1569e9d55410006, setting cluster-up flag (Was=false) 2016-08-18 10:05:40,280 INFO [10.22.9.171:59437.activeMasterManager] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager was loaded successfully. 2016-08-18 10:05:40,281 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(813): ClusterId : cb9fa8a5-abb1-4f07-a744-fca71e86e7e9 2016-08-18 10:05:40,281 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(813): ClusterId : cb9fa8a5-abb1-4f07-a744-fca71e86e7e9 2016-08-18 10:05:40,281 INFO [RS:0;10.22.9.171:59441] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully. 2016-08-18 10:05:40,281 INFO [M:0;10.22.9.171:59437] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully. 
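The ZKUtil lines above set watches on /2/balancer, /2/normalizer and the /2/switch znodes before those nodes exist; ZooKeeper permits a watch on a missing path, and the watcher later fires with type=NodeCreated, just as the events for /2/running show a few entries up. A sketch of the same watch-before-create pattern against the plain Apache ZooKeeper API rather than HBase's internal ZKUtil; the connect string, session timeout and printed messages are illustrative:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class WatchBeforeCreate {
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("Received ZooKeeper Event, type=" + event.getType()
                + ", state=" + event.getState() + ", path=" + event.getPath());
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, watcher);
        // exists() with watch=true is legal on a node that is not there yet;
        // the watcher fires with NodeCreated once /2/balancer appears.
        if (zk.exists("/2/balancer", true) == null) {
          System.out.println("Set watcher on znode that does not yet exist, /2/balancer");
        }
      }
    }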
2016-08-18 10:05:40,281 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing 2016-08-18 10:05:40,282 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(43): Procedure backup-proc is initializing 2016-08-18 10:05:40,283 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc already exists 2016-08-18 10:05:40,284 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc/acquired already exists 2016-08-18 10:05:40,285 INFO [10.22.9.171:59437.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/online-snapshot/acquired /2/online-snapshot/reached /2/online-snapshot/abort 2016-08-18 10:05:40,285 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized 2016-08-18 10:05:40,285 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc/abort already exists 2016-08-18 10:05:40,285 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing 2016-08-18 10:05:40,286 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc is initialized 2016-08-18 10:05:40,286 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing 2016-08-18 10:05:40,286 DEBUG [RS:0;10.22.9.171:59441] zookeeper.RecoverableZooKeeper(594): Node /2/online-snapshot/acquired already exists 2016-08-18 10:05:40,286 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/online-snapshot/acquired already exists 2016-08-18 10:05:40,287 DEBUG [10.22.9.171:59437.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,287 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized 2016-08-18 10:05:40,287 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing 2016-08-18 10:05:40,287 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized 2016-08-18 10:05:40,287 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc is initializing 2016-08-18 10:05:40,288 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/rolllog-proc/acquired already exists 2016-08-18 10:05:40,289 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc already exists 2016-08-18 10:05:40,290 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/acquired already exists 2016-08-18 10:05:40,290 INFO [10.22.9.171:59437.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/rolllog-proc/acquired /2/rolllog-proc/reached /2/rolllog-proc/abort 2016-08-18 10:05:40,291 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/reached already exists 2016-08-18 10:05:40,291 DEBUG [10.22.9.171:59437.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,291 DEBUG [RS:0;10.22.9.171:59441] 
procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized 2016-08-18 10:05:40,291 DEBUG [M:0;10.22.9.171:59437] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/abort already exists 2016-08-18 10:05:40,292 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc is initialized 2016-08-18 10:05:40,292 INFO [RS:0;10.22.9.171:59441] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G 2016-08-18 10:05:40,292 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/flush-table-proc/acquired already exists 2016-08-18 10:05:40,292 INFO [M:0;10.22.9.171:59437] regionserver.MemStoreFlusher(125): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G 2016-08-18 10:05:40,292 INFO [RS:0;10.22.9.171:59441] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-08-18 10:05:40,292 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec 2016-08-18 10:05:40,292 INFO [M:0;10.22.9.171:59437] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-08-18 10:05:40,293 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer$CompactionChecker(1555): CompactionChecker runs every 1sec 2016-08-18 10:05:40,293 DEBUG [RS:0;10.22.9.171:59441] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7831a507, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.9.171/10.22.9.171:0 2016-08-18 10:05:40,293 INFO [10.22.9.171:59437.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/flush-table-proc/acquired /2/flush-table-proc/reached /2/flush-table-proc/abort 2016-08-18 10:05:40,293 DEBUG [M:0;10.22.9.171:59437] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@349ac37, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.22.9.171/10.22.9.171:0 2016-08-18 10:05:40,293 DEBUG [M:0;10.22.9.171:59437] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:05:40,293 DEBUG [RS:0;10.22.9.171:59441] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:05:40,293 DEBUG [M:0;10.22.9.171:59437] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:05:40,293 DEBUG [RS:0;10.22.9.171:59441] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:05:40,294 DEBUG [M:0;10.22.9.171:59437] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:M:0;10.22.9.171:59437 2016-08-18 10:05:40,294 DEBUG [RS:0;10.22.9.171:59441] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;10.22.9.171:59441 2016-08-18 10:05:40,294 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/2/rs 2016-08-18 10:05:40,295 DEBUG [10.22.9.171:59437.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(248): Starting the controller for procedure member:10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,295 INFO [10.22.9.171:59437.activeMasterManager] master.MasterCoprocessorHost(91): System coprocessor loading is enabled 2016-08-18 10:05:40,295 INFO [10.22.9.171:59437.activeMasterManager] coprocessor.CoprocessorHost(161): System coprocessor org.apache.hadoop.hbase.backup.master.BackupController was loaded successfully with priority (536870911). 2016-08-18 10:05:40,295 DEBUG [M:0;10.22.9.171:59437] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,295 INFO [M:0;10.22.9.171:59437] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2016-08-18 10:05:40,295 DEBUG [10.22.9.171:59437.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-10.22.9.171:59437, corePoolSize=5, maxPoolSize=5 2016-08-18 10:05:40,295 INFO [M:0;10.22.9.171:59437] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2016-08-18 10:05:40,295 DEBUG [10.22.9.171:59437.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-10.22.9.171:59437, corePoolSize=5, maxPoolSize=5 2016-08-18 10:05:40,295 DEBUG [RS:0;10.22.9.171:59441] zookeeper.ZKUtil(365): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59441,1471539940207 2016-08-18 10:05:40,295 DEBUG [10.22.9.171:59437.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-10.22.9.171:59437, corePoolSize=5, maxPoolSize=5 2016-08-18 10:05:40,295 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59437,1471539940144 with port=59437, startcode=1471539940144 2016-08-18 10:05:40,296 DEBUG [10.22.9.171:59437.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-10.22.9.171:59437, corePoolSize=5, maxPoolSize=5 2016-08-18 10:05:40,295 INFO [RS:0;10.22.9.171:59441] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2016-08-18 10:05:40,296 INFO [RS:0;10.22.9.171:59441] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2016-08-18 10:05:40,296 DEBUG [10.22.9.171:59437.activeMasterManager] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-10.22.9.171:59437, corePoolSize=10, maxPoolSize=10 2016-08-18 10:05:40,296 DEBUG [M:0;10.22.9.171:59437] regionserver.HRegionServer(2358): Master is not running yet 2016-08-18 10:05:40,296 DEBUG [10.22.9.171:59437.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-10.22.9.171:59437, corePoolSize=1, maxPoolSize=1 2016-08-18 10:05:40,296 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59437,1471539940144 with port=59441, startcode=1471539940207 2016-08-18 10:05:40,296 WARN [M:0;10.22.9.171:59437] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying. 
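The backup-proc, online-snapshot and flush-table-proc members initializing above, and the BackupController loaded as a system coprocessor, all reach the master and region server through pluggable class lists. In this run the BackupManager registers the backup pieces programmatically (per its "Added ..." DEBUG lines), but equivalent wiring through the standard configuration keys would look roughly like the sketch below; treat the key-to-class pairing as an assumption rather than a record of what this test did:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ProcedureWiring {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Master-side procedure manager plus the observer coprocessor.
        conf.set("hbase.procedure.master.classes",
            "org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager");
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.backup.master.BackupController");
        // Region-server-side procedure member (the backup-proc member above).
        conf.set("hbase.procedure.regionserver.classes",
            "org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager");
      }
    }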
2016-08-18 10:05:40,296 INFO [10.22.9.171:59437.activeMasterManager] procedure2.ProcedureExecutor(487): Starting procedure executor threads=9 2016-08-18 10:05:40,296 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,296 INFO [10.22.9.171:59437.activeMasterManager] wal.WALProcedureStore(296): Starting WAL Procedure Store lease recovery 2016-08-18 10:05:40,297 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59441,1471539940207 2016-08-18 10:05:40,298 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-08-18 10:05:40,298 DEBUG [RpcServer.listener,port=59437] ipc.RpcServer$Listener(880): RpcServer.listener,port=59437: connection from 10.22.9.171:59447; # active connections: 1 2016-08-18 10:05:40,298 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /2/rs/10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,298 WARN [10.22.9.171:59437.activeMasterManager] wal.WALProcedureStore(941): Log directory not found: File hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/MasterProcWALs does not exist. 2016-08-18 10:05:40,299 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59437] ipc.RpcServer$Connection(1710): Auth successful for tyu.hfs.1 (auth:SIMPLE) 2016-08-18 10:05:40,299 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /2/rs/10.22.9.171,59441,1471539940207 2016-08-18 10:05:40,299 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59437] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59447 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:05:40,300 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59437] ipc.CallRunner(112): B.defaultRpcServer.handler=0,queue=0,port=59437: callId: 0 service: RegionServerStatusService methodName: RegionServerStartup size: 45 connection: 10.22.9.171:59447
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
	at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2295)
	at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:264)
	at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:05:40,301 DEBUG [10.22.9.171:59437.activeMasterManager] wal.WALProcedureStore(833): Roll new state log: 1 2016-08-18 10:05:40,302 INFO [10.22.9.171:59437.activeMasterManager] wal.WALProcedureStore(319): Lease acquired for flushLogId: 1 2016-08-18 10:05:40,302 DEBUG [10.22.9.171:59437.activeMasterManager] wal.WALProcedureStore(336): No state logs to
replay. 2016-08-18 10:05:40,302 DEBUG [10.22.9.171:59437.activeMasterManager] procedure2.ProcedureExecutor$1(298): load procedures maxProcId=0 2016-08-18 10:05:40,302 DEBUG [10.22.9.171:59437.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.backup.master.BackupLogCleaner 2016-08-18 10:05:40,302 DEBUG [10.22.9.171:59437.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2016-08-18 10:05:40,303 INFO [10.22.9.171:59437.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:05:40,305 DEBUG [10.22.9.171:59437.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner0x0, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:05:40,306 DEBUG [10.22.9.171:59437.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(674): replicationLogCleaner-0x1569e9d5541000a connected 2016-08-18 10:05:40,306 DEBUG [10.22.9.171:59437.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2016-08-18 10:05:40,307 DEBUG [10.22.9.171:59437.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2016-08-18 10:05:40,307 DEBUG [10.22.9.171:59437.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2016-08-18 10:05:40,307 DEBUG [10.22.9.171:59437.activeMasterManager] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2016-08-18 10:05:40,307 INFO [10.22.9.171:59437.activeMasterManager] master.ServerManager(1008): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms. 2016-08-18 10:05:40,307 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59437,1471539940144 with port=59437, startcode=1471539940144 2016-08-18 10:05:40,308 INFO [M:0;10.22.9.171:59437] master.ServerManager(456): Registering server=10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,308 DEBUG [RS:0;10.22.9.171:59441] regionserver.HRegionServer(2358): Master is not running yet 2016-08-18 10:05:40,308 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437 2016-08-18 10:05:40,308 WARN [RS:0;10.22.9.171:59441] regionserver.HRegionServer(941): reportForDuty failed; sleeping and then retrying. 2016-08-18 10:05:40,308 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:59425 2016-08-18 10:05:40,308 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1 2016-08-18 10:05:40,308 WARN [M:0;10.22.9.171:59437] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
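The CleanerChore initializations above show the master's two cleaner pipelines: log cleaners run against the oldWALs directory and HFile cleaners against the archive directory, each configured as a comma-separated plugin list. A sketch of the corresponding keys, with the class names taken verbatim from the initialize lines:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerPlugins {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.backup.master.BackupLogCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
        conf.set("hbase.master.hfilecleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
            + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
      }
    }

A file is deleted only when every plugin in its list agrees it is deletable, which is how a cleaner like BackupLogCleaner can hold back WALs still needed for incremental backup.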
2016-08-18 10:05:40,308 INFO [M:0;10.22.9.171:59437] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:05:40,308 DEBUG [M:0;10.22.9.171:59437] regionserver.HRegionServer(1654): logdir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,312 DEBUG [M:0;10.22.9.171:59437] regionserver.Replication(151): ReplicationStatisticsThread 300 2016-08-18 10:05:40,312 INFO [M:0;10.22.9.171:59437] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider 2016-08-18 10:05:40,312 INFO [M:0;10.22.9.171:59437] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy 2016-08-18 10:05:40,312 INFO [M:0;10.22.9.171:59437] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds 2016-08-18 10:05:40,314 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.9.171:59437, corePoolSize=3, maxPoolSize=3 2016-08-18 10:05:40,314 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.9.171:59437, corePoolSize=1, maxPoolSize=1 2016-08-18 10:05:40,315 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.9.171:59437, corePoolSize=3, maxPoolSize=3 2016-08-18 10:05:40,315 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.9.171:59437, corePoolSize=1, maxPoolSize=1 2016-08-18 10:05:40,315 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.9.171:59437, corePoolSize=2, maxPoolSize=2 2016-08-18 10:05:40,315 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59437, corePoolSize=10, maxPoolSize=10 2016-08-18 10:05:40,315 DEBUG [M:0;10.22.9.171:59437] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.9.171:59437, corePoolSize=3, maxPoolSize=3 2016-08-18 10:05:40,317 DEBUG [M:0;10.22.9.171:59437] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,317 DEBUG [M:0;10.22.9.171:59437] zookeeper.ZKUtil(365): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59441,1471539940207 2016-08-18 10:05:40,317 INFO [M:0;10.22.9.171:59437] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.9.171,59437,1471539940144] other RSs: [10.22.9.171,59437,1471539940144, 10.22.9.171,59441,1471539940207] 2016-08-18 10:05:40,359 INFO [10.22.9.171:59437.activeMasterManager] master.ServerManager(1025): Finished waiting for region servers count to settle; checked in 1, slept for 52 ms, expecting minimum of 1, maximum of 1, master is running 2016-08-18 10:05:40,360 INFO 
[10.22.9.171:59437.activeMasterManager] master.ServerManager(456): Registering server=10.22.9.171,59441,1471539940207 2016-08-18 10:05:40,360 INFO [10.22.9.171:59437.activeMasterManager] master.HMaster(710): Registered server found up in zk but who has not yet reported in: 10.22.9.171,59441,1471539940207 2016-08-18 10:05:40,360 INFO [M:0;10.22.9.171:59437] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore. 2016-08-18 10:05:40,360 INFO [SplitLogWorker-10.22.9.171:59437] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.9.171,59437,1471539940144 starting 2016-08-18 10:05:40,360 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1412): Serving as 10.22.9.171,59437,1471539940144, RpcServer on 10.22.9.171/10.22.9.171:59437, sessionid=0x1569e9d55410006 2016-08-18 10:05:40,360 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting 2016-08-18 10:05:40,360 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59437,1471539940144' 2016-08-18 10:05:40,361 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/rolllog-proc/abort' 2016-08-18 10:05:40,361 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/rolllog-proc/acquired' 2016-08-18 10:05:40,361 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(624): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Unable to get data of znode /2/meta-region-server because node does not exist (not an error) 2016-08-18 10:05:40,362 INFO [M:0;10.22.9.171:59437] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager. 
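Both server processes above instantiate their WAL through RegionGroupingProvider with BoundedGroupingStrategy, which is why the WAL file names throughout this log carry a .regiongroup-N suffix: regions are mapped into a fixed pool of WAL groups instead of sharing a single log. A sketch of the configuration that selects this provider, assuming the standard multiwal keys (the mini cluster here sets the equivalent in its test configuration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MultiWalConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "multiwal" maps to RegionGroupingProvider, the provider logged above.
        conf.set("hbase.wal.provider", "multiwal");
        // "bounded" maps to BoundedGroupingStrategy: a fixed pool of WAL groups.
        conf.set("hbase.wal.regiongrouping.strategy", "bounded");
        conf.setInt("hbase.wal.regiongrouping.numgroups", 2);
      }
    }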
2016-08-18 10:05:40,362 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started 2016-08-18 10:05:40,362 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting 2016-08-18 10:05:40,362 DEBUG [M:0;10.22.9.171:59437] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,362 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59437,1471539940144' 2016-08-18 10:05:40,362 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/online-snapshot/abort' 2016-08-18 10:05:40,363 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/online-snapshot/acquired' 2016-08-18 10:05:40,363 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(624): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Unable to get data of znode /2/meta-region-server because node does not exist (not an error) 2016-08-18 10:05:40,363 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started 2016-08-18 10:05:40,363 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting 2016-08-18 10:05:40,363 DEBUG [M:0;10.22.9.171:59437] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,364 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59437,1471539940144' 2016-08-18 10:05:40,364 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/flush-table-proc/abort' 2016-08-18 10:05:40,363 INFO [10.22.9.171:59437.activeMasterManager] master.HMaster(938): Re-assigning hbase:meta with replicaId, 0 it was on null 2016-08-18 10:05:40,364 DEBUG [10.22.9.171:59437.activeMasterManager] master.AssignmentManager(1291): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=10.22.9.171,59437,1471539940144; 2 (online=2) available servers, forceNewPlan=false 2016-08-18 10:05:40,364 INFO [10.22.9.171:59437.activeMasterManager] master.AssignmentManager(1080): Assigning hbase:meta,,1.1588230740 to 10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,364 DEBUG [M:0;10.22.9.171:59437] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/flush-table-proc/acquired' 2016-08-18 10:05:40,364 INFO [10.22.9.171:59437.activeMasterManager] master.RegionStates(1106): Transition {1588230740 state=OFFLINE, ts=1471539940364, server=null} to {1588230740 state=PENDING_OPEN, ts=1471539940364, server=10.22.9.171,59437,1471539940144} 2016-08-18 10:05:40,364 INFO [10.22.9.171:59437.activeMasterManager] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,365 DEBUG [M:0;10.22.9.171:59437] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started 2016-08-18 10:05:40,365 INFO [M:0;10.22.9.171:59437] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-08-18 10:05:40,365 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.MetaTableLocator(451): 
META region location doesn't exist, create it 2016-08-18 10:05:40,366 DEBUG [10.22.9.171:59437.activeMasterManager] master.ServerManager(934): New admin connection to 10.22.9.171,59437,1471539940144 2016-08-18 10:05:40,366 INFO [10.22.9.171:59437.activeMasterManager] regionserver.RSRpcServices(1666): Open hbase:meta,,1.1588230740 2016-08-18 10:05:40,367 INFO [RS_OPEN_META-10.22.9.171:59437-0] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider 2016-08-18 10:05:40,367 DEBUG [10.22.9.171:59437.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539940367,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"} 2016-08-18 10:05:40,367 INFO [RS_OPEN_META-10.22.9.171:59437-0] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy 2016-08-18 10:05:40,372 INFO [RS_OPEN_META-10.22.9.171:59437-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0, suffix=, logDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta, archiveDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs 2016-08-18 10:05:40,376 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] wal.FSHLog(665): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372 2016-08-18 10:05:40,381 INFO [RS_OPEN_META-10.22.9.171:59437-0] wal.FSHLog(1436): Slow sync cost: 4 ms, current pipeline: [] 2016-08-18 10:05:40,381 INFO [RS_OPEN_META-10.22.9.171:59437-0] wal.FSHLog(890): New WAL /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372 2016-08-18 10:05:40,382 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] regionserver.HRegion(6339): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:05:40,382 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] coprocessor.CoprocessorHost(181): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2016-08-18 10:05:40,382 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] regionserver.HRegion(7445): Registered coprocessor service: region=hbase:meta,,1 service=hbase.pb.MultiRowMutationService 2016-08-18 10:05:40,382 INFO [RS_OPEN_META-10.22.9.171:59437-0] regionserver.RegionCoprocessorHost(376): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
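Once MetaTableLocator publishes the hbase:meta location to the /2/meta-region-server znode as above, any client can resolve it. (The FSHLog line's rollsize of 121.60 MB also decodes cleanly: the 128 MB blocksize times the default 0.95 hbase.regionserver.logroll.multiplier.) A minimal client-side sketch of the meta lookup, assuming the standard 2.0-era client API; only the /2 chroot matches this log, the quorum and port are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocation {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.setInt("hbase.zookeeper.property.clientPort", 2181);
        // This cluster runs chrooted under baseZNode=/2.
        conf.set("zookeeper.znode.parent", "/2");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          // Same server the master just wrote to /2/meta-region-server.
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }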
2016-08-18 10:05:40,383 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table meta 1588230740
2016-08-18 10:05:40,383 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] regionserver.HRegion(736): Instantiated hbase:meta,,1.1588230740
2016-08-18 10:05:40,386 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:40,387 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:40,388 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info
2016-08-18 10:05:40,389 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:40,390 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:40,392 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table
2016-08-18 10:05:40,395 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740
2016-08-18 10:05:40,398 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (67108864) instead
2016-08-18 10:05:40,403 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/recovered.edits/3.seqid to file, newSeqId=3, maxSeqId=2
2016-08-18 10:05:40,404 INFO [RS_OPEN_META-10.22.9.171:59437-0] regionserver.HRegion(871): Onlined 1588230740; next sequenceid=3
2016-08-18 10:05:40,404 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:40,405 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:meta,,1.1588230740
2016-08-18 10:05:40,405 DEBUG [PostOpenDeployTasks:1588230740] master.AssignmentManager(2884): Got transition OPENED for {1588230740 state=PENDING_OPEN, ts=1471539940364, server=10.22.9.171,59437,1471539940144} from 10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,405 INFO [PostOpenDeployTasks:1588230740] master.RegionStates(1106): Transition {1588230740 state=PENDING_OPEN, ts=1471539940364, server=10.22.9.171,59437,1471539940144} to {1588230740 state=OPEN, ts=1471539940405, server=10.22.9.171,59437,1471539940144}
2016-08-18 10:05:40,405 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator(439): Setting hbase:meta region location in ZooKeeper as 10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,409 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/meta-region-server
2016-08-18 10:05:40,409 DEBUG [PostOpenDeployTasks:1588230740] master.RegionStates(452): Onlined 1588230740 on 10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,410 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:meta,,1.1588230740
2016-08-18 10:05:40,410 DEBUG [RS_OPEN_META-10.22.9.171:59437-0] handler.OpenRegionHandler(126): Opened hbase:meta,,1.1588230740 on 10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,445 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:593960000000000
2016-08-18 10:05:40,445 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 702msec: CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=FINISHED
2016-08-18 10:05:40,576 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:40,579 INFO [10.22.9.171:59437.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META
2016-08-18 10:05:40,580 DEBUG [10.22.9.171:59437.activeMasterManager] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539940580,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:meta"}
2016-08-18 10:05:40,581 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:40,583 INFO [10.22.9.171:59437.activeMasterManager] hbase.MetaTableAccessor(1700): Updated table hbase:meta state to ENABLED in META
2016-08-18 10:05:40,585 DEBUG [10.22.9.171:59437.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(server crash processing)
2016-08-18 10:05:40,585 INFO [10.22.9.171:59437.activeMasterManager] master.ServerManager(683): AssignmentManager hasn't finished failover cleanup; waiting
2016-08-18 10:05:40,586 INFO [10.22.9.171:59437.activeMasterManager] master.HMaster(965): hbase:meta with replicaId 0 assigned=1, location=10.22.9.171,59437,1471539940144
2016-08-18 10:05:40,593 INFO [10.22.9.171:59437.activeMasterManager] master.AssignmentManager(555): Clean cluster startup. Don't reassign user regions
2016-08-18 10:05:40,598 INFO [10.22.9.171:59437.activeMasterManager] master.AssignmentManager(425): Joined the cluster in 11ms, failover=false
2016-08-18 10:05:40,600 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info
2016-08-18 10:05:40,601 DEBUG [10.22.9.171:59437.activeMasterManager] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table
2016-08-18 10:05:40,601 INFO [10.22.9.171:59437.activeMasterManager] master.TableNamespaceManager(93): Namespace table not found. Creating...
2016-08-18 10:05:40,715 DEBUG [10.22.9.171:59437.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:05:40,719 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/hbase:namespace/write-master:594370000000000
2016-08-18 10:05:40,843 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741831_1007{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 315
2016-08-18 10:05:41,257 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2016-08-18 10:05:41,259 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6162): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/.tmp Table name == hbase:namespace
2016-08-18 10:05:41,271 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 0
2016-08-18 10:05:41,272 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(736): Instantiated hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
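Everything from the meta assignment above through the hbase:namespace creation now in flight is driven by the test harness bringing up its mini-cluster. A minimal sketch of the harness call that kicks off this whole sequence, assuming a plain driver around the test-scoped HBaseTestingUtility seen in this log (the class name and body here are illustrative, not from this run):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterBootSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // startMiniCluster() brings up mini DFS + ZK + master + region server;
        // on a fresh root dir the master then opens hbase:meta and creates
        // hbase:namespace via a CreateTableProcedure, as logged above.
        util.startMiniCluster(1); // one region server
        try {
          // test body would go here
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }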
2016-08-18 10:05:41,272 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1419): Closing hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.: disabling compactions & flushes
2016-08-18 10:05:41,272 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:05:41,272 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1552): Closed hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:05:41,308 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(2339): reportForDuty to master=10.22.9.171,59437,1471539940144 with port=59441, startcode=1471539940207
2016-08-18 10:05:41,311 INFO [B.defaultRpcServer.handler=1,queue=0,port=59437] master.ServerManager(456): Registering server=10.22.9.171,59441,1471539940207
2016-08-18 10:05:41,312 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1390): Config from master: hbase.rootdir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437
2016-08-18 10:05:41,312 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1390): Config from master: fs.defaultFS=hdfs://localhost:59425
2016-08-18 10:05:41,312 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1390): Config from master: hbase.master.info.port=-1
2016-08-18 10:05:41,312 WARN [RS:0;10.22.9.171:59441] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-08-18 10:05:41,313 INFO [RS:0;10.22.9.171:59441] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:41,313 DEBUG [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1654): logdir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207
2016-08-18 10:05:41,321 DEBUG [RS:0;10.22.9.171:59441] regionserver.Replication(151): ReplicationStatisticsThread 300
2016-08-18 10:05:41,321 INFO [RS:0;10.22.9.171:59441] wal.WALFactory(144): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider
2016-08-18 10:05:41,322 INFO [RS:0;10.22.9.171:59441] wal.RegionGroupingProvider(106): Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.BoundedGroupingStrategy
2016-08-18 10:05:41,322 INFO [RS:0;10.22.9.171:59441] regionserver.MetricsRegionServerWrapperImpl(139): Computing regionserver metrics every 5000 milliseconds
2016-08-18 10:05:41,324 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.22.9.171:59441, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:41,324 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.22.9.171:59441, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:41,324 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.22.9.171:59441, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:41,325 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.22.9.171:59441, corePoolSize=1, maxPoolSize=1
2016-08-18 10:05:41,325 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.22.9.171:59441, corePoolSize=2, maxPoolSize=2
2016-08-18 10:05:41,325 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59441, corePoolSize=10, maxPoolSize=10
2016-08-18 10:05:41,325 DEBUG [RS:0;10.22.9.171:59441] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.22.9.171:59441, corePoolSize=3, maxPoolSize=3
2016-08-18 10:05:41,327 DEBUG [RS:0;10.22.9.171:59441] zookeeper.ZKUtil(365): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,328 DEBUG [RS:0;10.22.9.171:59441] zookeeper.ZKUtil(365): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Set watcher on existing znode=/2/rs/10.22.9.171,59441,1471539940207
2016-08-18 10:05:41,328 INFO [RS:0;10.22.9.171:59441] regionserver.ReplicationSourceManager(246): Current list of replicators: [10.22.9.171,59437,1471539940144, 10.22.9.171,59441,1471539940207] other RSs: [10.22.9.171,59437,1471539940144, 10.22.9.171,59441,1471539940207]
2016-08-18 10:05:41,366 INFO [RS:0;10.22.9.171:59441] regionserver.HeapMemoryManager(191): Starting HeapMemoryTuner chore.
2016-08-18 10:05:41,366 INFO [SplitLogWorker-10.22.9.171:59441] regionserver.SplitLogWorker(134): SplitLogWorker 10.22.9.171,59441,1471539940207 starting
2016-08-18 10:05:41,366 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1412): Serving as 10.22.9.171,59441,1471539940207, RpcServer on 10.22.9.171/10.22.9.171:59441, sessionid=0x1569e9d55410007
2016-08-18 10:05:41,367 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(51): Procedure backup-proc is starting
2016-08-18 10:05:41,367 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59441,1471539940207'
2016-08-18 10:05:41,367 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/rolllog-proc/abort'
2016-08-18 10:05:41,367 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/rolllog-proc/acquired'
2016-08-18 10:05:41,368 INFO [RS:0;10.22.9.171:59441] regionserver.LogRollRegionServerProcedureManager(85): Started region server backup manager.
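The WALFactory/RegionGroupingProvider/BoundedGroupingStrategy lines above, and the per-group WAL names like regiongroup-0 and regiongroup-1 elsewhere in this log, indicate the multi-WAL provider is active. A sketch of the configuration that selects it; these keys match my reading of WALFactory/RegionGroupingProvider from this era, but treat the exact strings and group count as assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MultiWalConfSketch {
      static Configuration multiWalConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "multiwal");               // maps to RegionGroupingProvider
        conf.set("hbase.wal.regiongrouping.strategy", "bounded"); // BoundedGroupingStrategy
        conf.setInt("hbase.wal.regiongrouping.numgroups", 2);     // yields regiongroup-0/-1 names
        return conf;
      }
    }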
2016-08-18 10:05:41,368 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc is started
2016-08-18 10:05:41,368 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting
2016-08-18 10:05:41,368 DEBUG [RS:0;10.22.9.171:59441] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.22.9.171,59441,1471539940207
2016-08-18 10:05:41,368 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59441,1471539940207'
2016-08-18 10:05:41,368 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/online-snapshot/abort'
2016-08-18 10:05:41,369 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/online-snapshot/acquired'
2016-08-18 10:05:41,369 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started
2016-08-18 10:05:41,369 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc is starting
2016-08-18 10:05:41,370 DEBUG [RS:0;10.22.9.171:59441] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.22.9.171,59441,1471539940207
2016-08-18 10:05:41,370 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(356): Starting procedure member '10.22.9.171,59441,1471539940207'
2016-08-18 10:05:41,370 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/2/flush-table-proc/abort'
2016-08-18 10:05:41,370 DEBUG [RS:0;10.22.9.171:59441] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/2/flush-table-proc/acquired'
2016-08-18 10:05:41,371 DEBUG [RS:0;10.22.9.171:59441] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc is started
2016-08-18 10:05:41,371 INFO [RS:0;10.22.9.171:59441] quotas.RegionServerQuotaManager(62): Quota support disabled
2016-08-18 10:05:41,371 INFO [M:0;10.22.9.171:59437] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59437%2C1471539940144.regiongroup-0, suffix=, logDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144, archiveDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:05:41,374 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(665): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-0.1471539941371
2016-08-18 10:05:41,378 INFO [M:0;10.22.9.171:59437] wal.FSHLog(1436): Slow sync cost: 4 ms, current pipeline: []
2016-08-18 10:05:41,379 INFO [M:0;10.22.9.171:59437] wal.FSHLog(890): New WAL /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-0.1471539941371
2016-08-18 10:05:41,382 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":41}]},"row":"hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748."}
2016-08-18 10:05:41,384 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:41,386 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:05:41,439 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2016-08-18 10:05:41,489 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,490 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table hbase:namespace state
org.apache.hadoop.hbase.TableNotFoundException: hbase:namespace
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:05:41,491 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {880bec924ffe1f47e306a99e52984748 state=OFFLINE, ts=1471539941489, server=null} to {880bec924ffe1f47e306a99e52984748 state=PENDING_OPEN, ts=1471539941491, server=10.22.9.171,59437,1471539940144}
2016-08-18 10:05:41,491 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. with state=PENDING_OPEN, sn=10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,493 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:41,494 INFO [ProcedureExecutor-0] regionserver.RSRpcServices(1666): Open hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
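The ERROR just above is noisy but benign: CreateTableProcedure assigns the brand-new region before TableStateManager can see the table's state, so the disabled/disabling check trips over TableNotFoundException and assignment simply proceeds (the region opens a few entries later). Test code that needs the table usually just blocks on availability; a minimal sketch, assuming the harness's HBaseTestingUtility instance is in scope:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForTableSketch {
      static void waitForNamespaceTable(HBaseTestingUtility util) throws Exception {
        // Blocks until all regions of hbase:namespace are assigned and usable,
        // or throws after the 30s timeout.
        util.waitTableAvailable(TableName.valueOf("hbase:namespace"), 30000);
      }
    }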
2016-08-18 10:05:41,499 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,500 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539941500,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:namespace"}
2016-08-18 10:05:41,501 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:41,502 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table hbase:namespace state to ENABLED in META
2016-08-18 10:05:41,503 INFO [RS_OPEN_REGION-10.22.9.171:59437-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59437%2C1471539940144.regiongroup-1, suffix=, logDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144, archiveDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:05:41,506 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] wal.FSHLog(665): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:05:41,510 INFO [RS_OPEN_REGION-10.22.9.171:59437-0] wal.FSHLog(1436): Slow sync cost: 4 ms, current pipeline: []
2016-08-18 10:05:41,511 INFO [RS_OPEN_REGION-10.22.9.171:59437-0] wal.FSHLog(890): New WAL /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:05:41,512 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] regionserver.HRegion(6339): Opening region: {ENCODED => 880bec924ffe1f47e306a99e52984748, NAME => 'hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:05:41,513 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table namespace 880bec924ffe1f47e306a99e52984748
2016-08-18 10:05:41,513 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] regionserver.HRegion(736): Instantiated hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:05:41,518 INFO [StoreOpener-880bec924ffe1f47e306a99e52984748-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:41,518 INFO [StoreOpener-880bec924ffe1f47e306a99e52984748-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:41,519 DEBUG [StoreOpener-880bec924ffe1f47e306a99e52984748-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info
2016-08-18 10:05:41,520 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748
2016-08-18 10:05:41,527 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:05:41,527 INFO [RS_OPEN_REGION-10.22.9.171:59437-0] regionserver.HRegion(871): Onlined 880bec924ffe1f47e306a99e52984748; next sequenceid=2
2016-08-18 10:05:41,528 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:05:41,529 INFO [PostOpenDeployTasks:880bec924ffe1f47e306a99e52984748] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:05:41,529 DEBUG [PostOpenDeployTasks:880bec924ffe1f47e306a99e52984748] master.AssignmentManager(2884): Got transition OPENED for {880bec924ffe1f47e306a99e52984748 state=PENDING_OPEN, ts=1471539941491, server=10.22.9.171,59437,1471539940144} from 10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,529 INFO [PostOpenDeployTasks:880bec924ffe1f47e306a99e52984748] master.RegionStates(1106): Transition {880bec924ffe1f47e306a99e52984748 state=PENDING_OPEN, ts=1471539941491, server=10.22.9.171,59437,1471539940144} to {880bec924ffe1f47e306a99e52984748 state=OPEN, ts=1471539941529, server=10.22.9.171,59437,1471539940144}
2016-08-18 10:05:41,529 INFO [PostOpenDeployTasks:880bec924ffe1f47e306a99e52984748] master.RegionStateStore(207): Updating hbase:meta row hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. with state=OPEN, openSeqNum=2, server=10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,530 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:41,531 DEBUG [PostOpenDeployTasks:880bec924ffe1f47e306a99e52984748] master.RegionStates(452): Onlined 880bec924ffe1f47e306a99e52984748 on 10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,532 DEBUG [PostOpenDeployTasks:880bec924ffe1f47e306a99e52984748] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:05:41,532 DEBUG [RS_OPEN_REGION-10.22.9.171:59437-0] handler.OpenRegionHandler(126): Opened hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. on 10.22.9.171,59437,1471539940144
2016-08-18 10:05:41,638 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/namespace
2016-08-18 10:05:41,639 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/namespace
2016-08-18 10:05:41,712 DEBUG [10.22.9.171:59437.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
2016-08-18 10:05:41,819 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /2/table-lock/hbase:namespace/write-master:594370000000000
2016-08-18 10:05:41,819 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 1.1100sec: CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu state=FINISHED
2016-08-18 10:05:42,034 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:05:42,143 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/namespace
2016-08-18 10:05:42,145 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default
2016-08-18 10:05:42,362 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 610msec: CreateNamespaceProcedure (Namespace=default) id=2 owner=tyu state=FINISHED
2016-08-18 10:05:42,383 INFO [RS:0;10.22.9.171:59441] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59441%2C1471539940207.regiongroup-0, suffix=, logDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207, archiveDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:05:42,386 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(665): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-0.1471539942383
2016-08-18 10:05:42,396 INFO [RS:0;10.22.9.171:59441] wal.FSHLog(1436): Slow sync cost: 9 ms, current pipeline: []
2016-08-18 10:05:42,396 INFO [RS:0;10.22.9.171:59441] wal.FSHLog(890): New WAL /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-0.1471539942383
2016-08-18 10:05:42,476 DEBUG [10.22.9.171:59437.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store.
2016-08-18 10:05:42,696 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:05:42,808 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/namespace
2016-08-18 10:05:42,811 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default
2016-08-18 10:05:42,811 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase
2016-08-18 10:05:43,027 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 548msec: CreateNamespaceProcedure (Namespace=hbase) id=3 owner=tyu state=FINISHED
2016-08-18 10:05:43,035 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/namespace/default already exists
2016-08-18 10:05:43,036 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/namespace/default
2016-08-18 10:05:43,037 DEBUG [10.22.9.171:59437.activeMasterManager] zookeeper.RecoverableZooKeeper(594): Node /2/namespace/hbase already exists
2016-08-18 10:05:43,038 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/namespace/hbase
2016-08-18 10:05:43,038 INFO [10.22.9.171:59437.activeMasterManager] master.HMaster(807): Master has completed initialization
2016-08-18 10:05:43,039 DEBUG [10.22.9.171:59437.activeMasterManager] procedure.MasterProcedureScheduler(387): Wake event ProcedureEvent(master initialized)
2016-08-18 10:05:43,039 INFO [10.22.9.171:59437.activeMasterManager] quotas.MasterQuotaManager(72): Quota support disabled
2016-08-18 10:05:43,039 INFO [10.22.9.171:59437.activeMasterManager] zookeeper.ZooKeeperWatcher(225): not a secure deployment, proceeding
2016-08-18 10:05:43,346 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xcf72664 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:05:43,350 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xcf726640x0, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:05:43,351 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2760f8d2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:05:43,352 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:05:43,352 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:05:43,352 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xcf72664-0x1569e9d5541000b connected
2016-08-18 10:05:43,356 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:05:43,356 DEBUG [RpcServer.listener,port=59437] ipc.RpcServer$Listener(880): RpcServer.listener,port=59437: connection from 10.22.9.171:59458; # active connections: 2
2016-08-18 10:05:43,357 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59437] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:05:43,357 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59437] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59458 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:05:43,362 INFO [main] hbase.HBaseTestingUtility(1089): Minicluster is up
2016-08-18 10:05:43,362 INFO [main] hbase.HBaseTestingUtility(1263): The hbase.fs.tmp.dir is set to /user/tyu/hbase-staging
2016-08-18 10:05:43,362 INFO [main] hbase.HBaseTestingUtility(2441): Starting mini mapreduce cluster...
2016-08-18 10:05:43,362 INFO [main] hbase.HBaseTestingUtility(743): Setting test.cache.data to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/cache_data in system properties and HBase conf
2016-08-18 10:05:43,362 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.tmp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_tmp in system properties and HBase conf
2016-08-18 10:05:43,363 INFO [main] hbase.HBaseTestingUtility(743): Setting hadoop.log.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/hadoop_logs in system properties and HBase conf
2016-08-18 10:05:43,363 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.local.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/mapred_local in system properties and HBase conf
2016-08-18 10:05:43,363 INFO [main] hbase.HBaseTestingUtility(743): Setting mapreduce.cluster.temp.dir to /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/mapred_temp in system properties and HBase conf
2016-08-18 10:05:43,363 INFO [main] hbase.HBaseTestingUtility(734): read short circuit is OFF
2016-08-18 10:05:43,364 INFO [10.22.9.171:59437.activeMasterManager] master.HMaster(1495): Client=null/null create 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 10:05:43,395 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:05:43,428 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1558590
2016-08-18 10:05:43,472 DEBUG [10.22.9.171:59437.activeMasterManager] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:05:43,476 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/hbase:backup/write-master:594370000000000
2016-08-18 10:05:43,477 INFO [10.22.9.171:59437.activeMasterManager] master.BackupController(51): Created hbase:backup table
2016-08-18 10:05:43,595 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741836_1012{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 535
2016-08-18 10:05:44,000 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/.tmp/data/hbase/backup/.tabledesc/.tableinfo.0000000001
2016-08-18 10:05:44,002 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(6162): creating HRegion hbase:backup HTD == 'hbase:backup', {NAME => 'meta', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'session', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/.tmp Table name == hbase:backup
2016-08-18 10:05:44,012 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 0
2016-08-18 10:05:44,014 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(736): Instantiated hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
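The HTD printed in the create call above fully determines the hbase:backup layout: two families, meta and session, each keeping a single version with 64 KB blocks. A client-side equivalent using the descriptor API of this era (HTableDescriptor/HColumnDescriptor); the Admin handle is assumed to come from the test's connection:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class BackupTableSketch {
      static void createBackupTable(Admin admin) throws Exception {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("hbase", "backup"));
        for (String family : new String[] { "meta", "session" }) {
          HColumnDescriptor hcd = new HColumnDescriptor(family);
          hcd.setMaxVersions(1);   // VERSIONS => '1'
          hcd.setBlocksize(65536); // BLOCKSIZE => '65536'
          htd.addFamily(hcd);
        }
        // Drives a CreateTableProcedure (table=hbase:backup) like the one logged here.
        admin.createTable(htd);
      }
    }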
2016-08-18 10:05:44,015 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1419): Closing hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.: disabling compactions & flushes
2016-08-18 10:05:44,015 DEBUG [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:05:44,015 INFO [RegionOpenAndInitThread-hbase:backup-1] regionserver.HRegion(1552): Closed hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:05:44,128 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":38}]},"row":"hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020."}
2016-08-18 10:05:44,129 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:44,130 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:05:44,235 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,236 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table hbase:backup state
org.apache.hadoop.hbase.TableNotFoundException: hbase:backup
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:05:44,236 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {f83c1e5a1081010f5215d68f80335020 state=OFFLINE, ts=1471539944235, server=null} to {f83c1e5a1081010f5215d68f80335020 state=PENDING_OPEN, ts=1471539944236, server=10.22.9.171,59441,1471539940207}
2016-08-18 10:05:44,236 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020. with state=PENDING_OPEN, sn=10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,237 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:44,238 DEBUG [ProcedureExecutor-3] master.ServerManager(934): New admin connection to 10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,240 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service AdminService, sasl=false
2016-08-18 10:05:44,240 DEBUG [RpcServer.listener,port=59441] ipc.RpcServer$Listener(880): RpcServer.listener,port=59441: connection from 10.22.9.171:59464; # active connections: 1
2016-08-18 10:05:44,242 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59441] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:05:44,242 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59441] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59464 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:05:44,242 INFO [PriorityRpcServer.handler=1,queue=1,port=59441] regionserver.RSRpcServices(1666): Open hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:05:44,248 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,248 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539944248,"tag":[],"qualifier":"state","vlen":2}]},"row":"hbase:backup"}
2016-08-18 10:05:44,249 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:44,250 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table hbase:backup state to ENABLED in META
2016-08-18 10:05:44,252 INFO [RS_OPEN_REGION-10.22.9.171:59441-0] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59441%2C1471539940207.regiongroup-1, suffix=, logDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207, archiveDir=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:05:44,255 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] wal.FSHLog(665): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-1.1471539944252
2016-08-18 10:05:44,259 INFO [RS_OPEN_REGION-10.22.9.171:59441-0] wal.FSHLog(1436): Slow sync cost: 4 ms, current pipeline: []
2016-08-18 10:05:44,262 INFO [RS_OPEN_REGION-10.22.9.171:59441-0] wal.FSHLog(890): New WAL /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-1.1471539944252
2016-08-18 10:05:44,264 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] regionserver.HRegion(6339): Opening region: {ENCODED => f83c1e5a1081010f5215d68f80335020, NAME => 'hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:05:44,265 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table backup f83c1e5a1081010f5215d68f80335020
2016-08-18 10:05:44,265 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] regionserver.HRegion(736): Instantiated hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:05:44,269 INFO [StoreOpener-f83c1e5a1081010f5215d68f80335020-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:44,269 INFO [StoreOpener-f83c1e5a1081010f5215d68f80335020-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:44,271 DEBUG [StoreOpener-f83c1e5a1081010f5215d68f80335020-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/meta
2016-08-18 10:05:44,273 INFO [StoreOpener-f83c1e5a1081010f5215d68f80335020-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:05:44,273 INFO [StoreOpener-f83c1e5a1081010f5215d68f80335020-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:05:44,274 DEBUG [StoreOpener-f83c1e5a1081010f5215d68f80335020-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/session
2016-08-18 10:05:44,275 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020
2016-08-18 10:05:44,278 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] regionserver.FlushLargeStoresPolicy(72): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:backup, use config (67108864) instead
2016-08-18 10:05:44,282 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:05:44,282 INFO [RS_OPEN_REGION-10.22.9.171:59441-0] regionserver.HRegion(871): Onlined f83c1e5a1081010f5215d68f80335020; next sequenceid=2
2016-08-18 10:05:44,283 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-1.1471539944252
2016-08-18 10:05:44,283 INFO [PostOpenDeployTasks:f83c1e5a1081010f5215d68f80335020] regionserver.HRegionServer(1952): Post open deploy tasks for hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:05:44,284 DEBUG [PriorityRpcServer.handler=3,queue=1,port=59437] master.AssignmentManager(2884): Got transition OPENED for {f83c1e5a1081010f5215d68f80335020 state=PENDING_OPEN, ts=1471539944236, server=10.22.9.171,59441,1471539940207} from 10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,284 INFO [PriorityRpcServer.handler=3,queue=1,port=59437] master.RegionStates(1106): Transition {f83c1e5a1081010f5215d68f80335020 state=PENDING_OPEN, ts=1471539944236, server=10.22.9.171,59441,1471539940207} to {f83c1e5a1081010f5215d68f80335020 state=OPEN, ts=1471539944284, server=10.22.9.171,59441,1471539940207}
2016-08-18 10:05:44,285 INFO [PriorityRpcServer.handler=3,queue=1,port=59437] master.RegionStateStore(207): Updating hbase:meta row hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020. with state=OPEN, openSeqNum=2, server=10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,285 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:05:44,286 DEBUG [PriorityRpcServer.handler=3,queue=1,port=59437] master.RegionStates(452): Onlined f83c1e5a1081010f5215d68f80335020 on 10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,287 DEBUG [PostOpenDeployTasks:f83c1e5a1081010f5215d68f80335020] regionserver.HRegionServer(1979): Finished post open deploy task for hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:05:44,288 DEBUG [RS_OPEN_REGION-10.22.9.171:59441-0] handler.OpenRegionHandler(126): Opened hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020. on 10.22.9.171,59441,1471539940207
2016-08-18 10:05:44,323 WARN [main] containermanager.AuxServices(130): The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
2016-08-18 10:05:44,508 WARN [main] containermanager.AuxServices(130): The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
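The two AuxServices warnings above (the garbled spelling is Hadoop's own message text, reproduced verbatim) mean the aux service is registered under the key name mapreduce_shuffle while this build's ShuffleHandler reports its own service name as httpshuffle; that mismatch is harmless here, since nothing in these tests reads the shuffle service metadata by name. The configuration shape being complained about looks roughly like this sketch (standard YARN keys; the wiring is illustrative, not taken from this run):

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AuxServiceConfSketch {
      static YarnConfiguration shuffleConf() {
        YarnConfiguration conf = new YarnConfiguration();
        // The service's key name in the config...
        conf.setStrings("yarn.nodemanager.aux-services", "mapreduce_shuffle");
        // ...bound to a handler class whose self-reported name differs ("httpshuffle").
        conf.setClass("yarn.nodemanager.aux-services.mapreduce_shuffle.class",
            org.apache.hadoop.mapred.ShuffleHandler.class, Object.class);
        return conf;
      }
    }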
2016-08-18 10:05:44,573 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /2/table-lock/hbase:backup/write-master:594370000000000
2016-08-18 10:05:44,573 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 1.1000sec: CreateTableProcedure (table=hbase:backup) id=4 owner=tyu state=FINISHED
2016-08-18 10:05:47,022 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-nodemanager.properties,hadoop-metrics2.properties
2016-08-18 10:05:54,826 INFO [Thread-446] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:54,829 INFO [Thread-446] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.3/hadoop-yarn-common-2.7.3.jar!/webapps/jobhistory to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_59474_jobhistory____6ryy6q/webapp
Aug 18, 2016 10:05:54 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices as a root resource class
Aug 18, 2016 10:05:54 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.hs.webapp.JAXBContextResolver as a provider class
Aug 18, 2016 10:05:54 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 18, 2016 10:05:54 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Aug 18, 2016 10:05:55 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.hs.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 18, 2016 10:05:55 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 18, 2016 10:05:55 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
2016-08-18 10:05:55,526 INFO [Thread-446] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:59474
Aug 18, 2016 10:05:56 AM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
2016-08-18 10:05:56,265 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:56,269 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.3/hadoop-yarn-common-2.7.3.jar!/webapps/cluster to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_59479_cluster____ezwcdy/webapp
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 18, 2016 10:05:56 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2016-08-18 10:05:56,629 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:59479
Aug 18, 2016 10:05:56 AM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
2016-08-18 10:05:56,719 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-08-18 10:05:56,721 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.3/hadoop-yarn-common-2.7.3.jar!/webapps/node to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_59484_node____.f5ppiy/webapp
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
Aug 18, 2016 10:05:56 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2016-08-18 10:05:56,894 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:59484
Aug 18, 2016 10:05:56 AM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
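The three Jetty/Jersey bring-ups above are the JobHistory, ResourceManager, and first NodeManager web apps; a second NodeManager follows below, after which the harness declares the mini MapReduce cluster started. On the harness side this is a single call; a minimal sketch, again assuming the same HBaseTestingUtility instance:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniMrSketch {
      static void withMiniMr(HBaseTestingUtility util) throws Exception {
        util.startMiniMapReduceCluster(); // RM + NMs + JobHistory, as logged here
        try {
          // MapReduce-backed test steps (e.g. backup/restore jobs) run here.
        } finally {
          util.shutdownMiniMapReduceCluster();
        }
      }
    }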
2016-08-18 10:05:56,939 INFO [main] log.Slf4jLog(67): jetty-6.1.26 2016-08-18 10:05:56,942 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.3/hadoop-yarn-common-2.7.3.jar!/webapps/node to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_tyus.macbook.pro_local_59488_node____7c2b9m/webapp Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class Aug 18, 2016 10:05:56 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Aug 18, 2016 10:05:56 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton" Aug 18, 2016 10:05:57 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton" Aug 18, 2016 10:05:57 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton" 2016-08-18 10:05:57,115 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:59488 2016-08-18 10:05:57,127 INFO [main] hbase.HBaseTestingUtility(2469): Mini mapreduce cluster started 2016-08-18 10:05:57,127 INFO [main] backup.TestBackupBase(110): ROOTDIR hdfs://localhost:59388/backupUT 2016-08-18 10:05:57,127 INFO [main] backup.TestBackupBase(112): REMOTE ROOTDIR hdfs://localhost:59425/backupUT 2016-08-18 10:05:57,140 DEBUG [main] client.ConnectionImplementation(604): Table hbase:backup should be available 2016-08-18 10:05:57,141 DEBUG [main] backup.TestBackupBase(125): backup table exists and available 2016-08-18 10:05:57,197 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:05:57,197 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59489; # active connections: 3 2016-08-18 10:05:57,198 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:05:57,198 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59489 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:05:57,249 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] 
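Editor's note: the GuiceFilter "Multiple Servlet injectors detected" warnings above are expected here, since the YARN ResourceManager and NodeManager webapps are started inside the same JVM by the mini MapReduce cluster. The sequence the log records (mini HBase cluster, then mini MR cluster, then the backup ROOTDIR lines from TestBackupBase) corresponds roughly to the JUnit-style setup sketched below. This is a minimal sketch assuming the public HBaseTestingUtility API of the 2.0.0-SNAPSHOT build named later in this log; the backupUT directory name is taken from the ROOTDIR lines, and its exact location here is illustrative.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;

// Sketch of the scaffolding this log records: a mini HBase cluster
// (1 master, 1 regionserver, 1 datanode) plus a mini MapReduce cluster,
// with a backup root directory on the test DFS.
public class BackupTestSetupSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(1);           // HBase on mini HDFS
    util.startMiniMapReduceCluster();   // YARN RM/NM webapps seen in the Jetty lines

    // ROOTDIR in the log is a backupUT directory on the mini DFS;
    // the subpath chosen here is an assumption for illustration.
    Path backupRoot = util.getDataTestDirOnTestFS("backupUT");
    System.out.println("ROOTDIR " + backupRoot);

    util.shutdownMiniMapReduceCluster();
    util.shutdownMiniCluster();
  }
}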
master.HMaster(2491): Client=tyu//10.22.9.171 creating {NAME => 'ns1'} 2016-08-18 10:05:57,356 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns1) id=5 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 2016-08-18 10:05:57,402 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-18 10:05:57,519 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-18 10:05:57,577 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:05:57,687 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-18 10:05:57,690 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-18 10:05:57,690 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-18 10:05:57,690 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-18 10:05:57,722 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-18 10:05:57,900 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 542msec: CreateNamespaceProcedure (Namespace=ns1) id=5 owner=tyu state=FINISHED 2016-08-18 10:05:58,029 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=5 2016-08-18 10:05:58,032 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.HMaster(2491): Client=tyu//10.22.9.171 creating {NAME => 'ns2'} 2016-08-18 10:05:58,139 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns2) id=6 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
2016-08-18 10:05:58,142 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-18 10:05:58,248 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-18 10:05:58,357 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:05:58,454 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-18 10:05:58,468 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-18 10:05:58,472 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns2 with data: \x0A\x03ns2 2016-08-18 10:05:58,472 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-18 10:05:58,472 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-18 10:05:58,472 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-18 10:05:58,682 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 542msec: CreateNamespaceProcedure (Namespace=ns2) id=6 owner=tyu state=FINISHED 2016-08-18 10:05:58,762 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=6 2016-08-18 10:05:58,764 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.HMaster(2491): Client=tyu//10.22.9.171 creating {NAME => 'ns3'} 2016-08-18 10:05:58,868 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns3) id=7 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
2016-08-18 10:05:58,871 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-18 10:05:58,973 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-18 10:05:59,082 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:05:59,176 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-18 10:05:59,189 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-18 10:05:59,193 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns2 with data: \x0A\x03ns2 2016-08-18 10:05:59,193 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-18 10:05:59,193 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns3 with data: \x0A\x03ns3 2016-08-18 10:05:59,193 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-18 10:05:59,193 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-18 10:05:59,405 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 533msec: CreateNamespaceProcedure (Namespace=ns3) id=7 owner=tyu state=FINISHED 2016-08-18 10:05:59,479 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=7 2016-08-18 10:05:59,480 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.HMaster(2491): Client=tyu//10.22.9.171 creating {NAME => 'ns4'} 2016-08-18 10:05:59,589 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateNamespaceProcedure (Namespace=ns4) id=8 owner=tyu state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
2016-08-18 10:05:59,592 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-18 10:05:59,696 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-18 10:05:59,807 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:05:59,904 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-18 10:05:59,917 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-08-18 10:05:59,920 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns2 with data: \x0A\x03ns2 2016-08-18 10:05:59,920 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns1 with data: \x0A\x03ns1 2016-08-18 10:05:59,921 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns4 with data: \x0A\x03ns4 2016-08-18 10:05:59,921 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node ns3 with data: \x0A\x03ns3 2016-08-18 10:05:59,921 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-08-18 10:05:59,921 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-08-18 10:06:00,131 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 545msec: CreateNamespaceProcedure (Namespace=ns4) id=8 owner=tyu state=FINISHED 2016-08-18 10:06:00,208 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=8 2016-08-18 10:06:00,228 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns1:test-1471539957141', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-18 10:06:00,336 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns1:test-1471539957141) id=9 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
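Editor's note: the four CreateNamespaceProcedure runs above (procIds 5 through 8, each completing in roughly 530-545 ms) are driven by ordinary Admin calls, with the client polling MasterRpcServices ("Checking to see if procedure is done") until state=FINISHED; the CreateTableProcedure just queued as procId=9 continues below. A minimal client-side sketch, assuming a Connection built from the test utility's Configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Creates ns1..ns4 the way the master receives them in the log.
// createNamespace() blocks until the master-side procedure finishes,
// which is why each "added to the store" is followed by polling lines.
public class CreateNamespacesSketch {
  static void createNamespaces(Configuration conf) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      for (String ns : new String[] {"ns1", "ns2", "ns3", "ns4"}) {
        admin.createNamespace(NamespaceDescriptor.create(ns).build());
      }
    }
  }
}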
2016-08-18 10:06:00,340 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:test-1471539957141/write-master:593960000000000 2016-08-18 10:06:00,356 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-18 10:06:00,459 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-18 10:06:00,461 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:06:00,464 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/test-1471539957141/.tabledesc/.tableinfo.0000000001 2016-08-18 10:06:00,465 INFO [RegionOpenAndInitThread-ns1:test-1471539957141-1] regionserver.HRegion(6162): creating HRegion ns1:test-1471539957141 HTD == 'ns1:test-1471539957141', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns1:test-1471539957141 2016-08-18 10:06:00,481 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:06:00,482 DEBUG [RegionOpenAndInitThread-ns1:test-1471539957141-1] regionserver.HRegion(736): Instantiated ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:00,482 DEBUG [RegionOpenAndInitThread-ns1:test-1471539957141-1] regionserver.HRegion(1419): Closing ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.: disabling compactions & flushes 2016-08-18 10:06:00,482 DEBUG [RegionOpenAndInitThread-ns1:test-1471539957141-1] regionserver.HRegion(1446): Updates disabled for region ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:00,482 INFO [RegionOpenAndInitThread-ns1:test-1471539957141-1] regionserver.HRegion(1552): Closed ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 
2016-08-18 10:06:00,594 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":48}]},"row":"ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843."} 2016-08-18 10:06:00,596 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:06:00,597 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 10:06:00,663 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-18 10:06:00,706 INFO [ProcedureExecutor-1] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,707 ERROR [ProcedureExecutor-1] master.TableStateManager(134): Unable to get table ns1:test-1471539957141 state org.apache.hadoop.hbase.TableNotFoundException: ns1:test-1471539957141 at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-18 10:06:00,707 INFO [ProcedureExecutor-1] master.RegionStates(1106): Transition {3c1d62f1b34f7382cb57de1ded772843 state=OFFLINE, ts=1471539960706, server=null} to {3c1d62f1b34f7382cb57de1ded772843 state=PENDING_OPEN, ts=1471539960707, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:06:00,708 INFO [ProcedureExecutor-1] master.RegionStateStore(207): Updating hbase:meta row ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 
with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,708 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:06:00,711 INFO [PriorityRpcServer.handler=1,queue=1,port=59399] regionserver.RSRpcServices(1666): Open ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:00,721 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59399%2C1471539932874.regiongroup-2, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs 2016-08-18 10:06:00,724 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:06:00,730 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] wal.FSHLog(1436): Slow sync cost: 6 ms, current pipeline: [] 2016-08-18 10:06:00,731 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:06:00,731 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(6339): Opening region: {ENCODED => 3c1d62f1b34f7382cb57de1ded772843, NAME => 'ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:06:00,732 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-1471539957141 3c1d62f1b34f7382cb57de1ded772843 2016-08-18 10:06:00,732 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(736): Instantiated ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 
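Editor's note: the FSHLog(530) WAL configuration record above reports blocksize=128 MB and rollsize=121.60 MB. The roll size is simply the block size scaled by hbase.regionserver.logroll.multiplier, whose default is 0.95, so 128 MB x 0.95 = 121.6 MB. A one-off check of that arithmetic:

public class WalRollSizeSketch {
  public static void main(String[] args) {
    long blockSize = 128L * 1024 * 1024;   // blocksize=128 MB from the FSHLog line
    double multiplier = 0.95;              // hbase.regionserver.logroll.multiplier default
    System.out.printf("rollsize=%.2f MB%n",
        blockSize * multiplier / (1024.0 * 1024.0)); // prints rollsize=121.60 MB
  }
}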
2016-08-18 10:06:00,735 INFO [StoreOpener-3c1d62f1b34f7382cb57de1ded772843-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:06:00,736 INFO [StoreOpener-3c1d62f1b34f7382cb57de1ded772843-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:06:00,737 DEBUG [StoreOpener-3c1d62f1b34f7382cb57de1ded772843-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f 2016-08-18 10:06:00,738 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843 2016-08-18 10:06:00,743 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:06:00,743 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(871): Onlined 3c1d62f1b34f7382cb57de1ded772843; next sequenceid=2 2016-08-18 10:06:00,744 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:06:00,745 INFO [PostOpenDeployTasks:3c1d62f1b34f7382cb57de1ded772843] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:00,746 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {3c1d62f1b34f7382cb57de1ded772843 state=PENDING_OPEN, ts=1471539960707, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,746 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStates(1106): Transition {3c1d62f1b34f7382cb57de1ded772843 state=PENDING_OPEN, ts=1471539960707, server=10.22.9.171,59399,1471539932874} to {3c1d62f1b34f7382cb57de1ded772843 state=OPEN, ts=1471539960746, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:06:00,746 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 
with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,746 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:06:00,748 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStates(452): Onlined 3c1d62f1b34f7382cb57de1ded772843 on 10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,748 DEBUG [ProcedureExecutor-1] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,748 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539960748,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:test-1471539957141"} 2016-08-18 10:06:00,748 ERROR [B.defaultRpcServer.handler=4,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns1:test-1471539957141 state org.apache.hadoop.hbase.TableNotFoundException: ns1:test-1471539957141 at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-18 10:06:00,749 DEBUG [PostOpenDeployTasks:3c1d62f1b34f7382cb57de1ded772843] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:00,749 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] handler.OpenRegionHandler(126): Opened ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 
on 10.22.9.171,59399,1471539932874 2016-08-18 10:06:00,749 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:06:00,750 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1700): Updated table ns1:test-1471539957141 state to ENABLED in META 2016-08-18 10:06:00,965 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-18 10:06:01,077 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:test-1471539957141/write-master:593960000000000 2016-08-18 10:06:01,077 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 737msec: CreateTableProcedure (table=ns1:test-1471539957141) id=9 owner=tyu state=FINISHED 2016-08-18 10:06:01,473 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=9 2016-08-18 10:06:01,474 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns1:test-1471539957141 completed 2016-08-18 10:06:01,475 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x75b4d63d connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:06:01,481 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x75b4d63d0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:06:01,482 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42d6572d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:06:01,482 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:06:01,482 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:06:01,483 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x75b4d63d-0x1569e9d5541000c connected 2016-08-18 10:06:01,486 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:01,486 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59500; # active connections: 4 2016-08-18 10:06:01,486 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:01,487 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59500 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:01,493 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:01,493 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59501; # active connections: 2 
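Editor's note: the CREATE operation that just completed (procId=9) corresponds to an Admin.createTable() call with the exact HTD printed at HMaster(1495): a single family 'f' with default attributes. The intermediate TableNotFoundException logged at ERROR by TableStateManager occurs inside the create procedure before the table's state row reaches hbase:meta; as the log shows, the procedure still finishes (state=FINISHED at 10:06:01,077) and the table is marked ENABLED, so it appears to be a benign race during creation. A sketch of the client call, using the pre-2.0 HTableDescriptor API visible in this snapshot:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Mirrors the schema printed by HMaster(1495): one column family 'f'
// (VERSIONS => '1', BLOOMFILTER => 'ROW', and the other defaults).
// createTable() returns once the procedure reports completion, matching
// the "Operation: CREATE ... completed" line above.
public class CreateTestTableSketch {
  static void createTestTable(Admin admin, String namespace, String table)
      throws Exception {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(namespace, table));
    htd.addFamily(new HColumnDescriptor("f").setMaxVersions(1));
    admin.createTable(htd);
  }
}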
2016-08-18 10:06:01,494 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:01,494 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59501 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
[2016-08-18 10:06:01,498 through 10:06:01,668: a long run of near-identical DEBUG records from threads sync.0-sync.4, all reading wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721, elided as duplicates]
2016-08-18 10:06:01,671 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns2:test-14715399571411', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-18 10:06:01,775 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns2:test-14715399571411) id=10 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:06:01,778 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-18 10:06:01,780 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:test-14715399571411/write-master:593960000000000
2016-08-18 10:06:01,880 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-18 10:06:01,902 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:06:01,907 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/test-14715399571411/.tabledesc/.tableinfo.0000000001
2016-08-18 10:06:01,909 INFO [RegionOpenAndInitThread-ns2:test-14715399571411-1] regionserver.HRegion(6162): creating HRegion ns2:test-14715399571411 HTD == 'ns2:test-14715399571411', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns2:test-14715399571411
2016-08-18 10:06:01,920 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:06:01,921 DEBUG [RegionOpenAndInitThread-ns2:test-14715399571411-1] regionserver.HRegion(736): Instantiated ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:01,921 DEBUG [RegionOpenAndInitThread-ns2:test-14715399571411-1] regionserver.HRegion(1419): Closing ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.: disabling compactions & flushes
2016-08-18 10:06:01,921 DEBUG [RegionOpenAndInitThread-ns2:test-14715399571411-1] regionserver.HRegion(1446): Updates disabled for region ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:01,921 INFO [RegionOpenAndInitThread-ns2:test-14715399571411-1] regionserver.HRegion(1552): Closed ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
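The FSTableDescriptors entry shows the new table's descriptor being serialized under the procedure's temporary root (.tmp/data/ns2/...) before the table directory is moved into its final location. For reference, a sketch of reading a descriptor back off the filesystem with the era's static helper, written as statements inside a test method (the paths are copied from this log; the helper's exact signature on this 2016 build is an assumption):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.FSTableDescriptors;

    Configuration conf = HBaseConfiguration.create();
    Path rootDir = new Path("hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179");
    FileSystem fs = rootDir.getFileSystem(conf);
    // After the procedure finishes, the descriptor lives under
    // <rootDir>/data/ns2/test-14715399571411/.tabledesc/.tableinfo.0000000001.
    HTableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(
        fs, rootDir, TableName.valueOf("ns2:test-14715399571411"));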
2016-08-18 10:06:02,029 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":49}]},"row":"ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3."}
2016-08-18 10:06:02,030 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:02,031 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:06:02,087 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-18 10:06:02,140 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,141 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table ns2:test-14715399571411 state
org.apache.hadoop.hbase.TableNotFoundException: ns2:test-14715399571411
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:06:02,141 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {1147a0b47ba2d478b911f466b29f0fc3 state=OFFLINE, ts=1471539962140, server=null} to {1147a0b47ba2d478b911f466b29f0fc3 state=PENDING_OPEN, ts=1471539962141, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:06:02,141 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,142 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:02,144 INFO [PriorityRpcServer.handler=2,queue=0,port=59399] regionserver.RSRpcServices(1666): Open ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:02,152 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] wal.FSHLog(530): WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=10.22.9.171%2C59399%2C1471539932874.regiongroup-3, suffix=, logDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874, archiveDir=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:06:02,154 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:06:02,158 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] wal.FSHLog(1436): Slow sync cost: 4 ms, current pipeline: []
2016-08-18 10:06:02,158 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] wal.FSHLog(890): New WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:06:02,159 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(6339): Opening region: {ENCODED => 1147a0b47ba2d478b911f466b29f0fc3, NAME => 'ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:06:02,160 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-14715399571411 1147a0b47ba2d478b911f466b29f0fc3
2016-08-18 10:06:02,160 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(736): Instantiated ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
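The ERROR above is worth a note: TableStateManager throws TableNotFoundException because CreateTableProcedure assigns the region before the table's state row has been published, yet the same procedure later finishes and the table is marked ENABLED in META, so on this build it reads as a noisy transient rather than a failure. Code that needs to use a table immediately after creating it is safer polling for availability; a minimal sketch (the helper name and timeout are mine, not from the test):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class TableAvailability {
      // Poll until the new table's region(s) are open and serving, or give up.
      static void waitUntilAvailable(Admin admin, TableName tn, long timeoutMs)
          throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!admin.isTableAvailable(tn)) {
          if (System.currentTimeMillis() > deadline) {
            throw new IOException("Table " + tn + " not available after " + timeoutMs + " ms");
          }
          Thread.sleep(200);
        }
      }
    }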
2016-08-18 10:06:02,164 INFO [StoreOpener-1147a0b47ba2d478b911f466b29f0fc3-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:06:02,164 INFO [StoreOpener-1147a0b47ba2d478b911f466b29f0fc3-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:06:02,166 DEBUG [StoreOpener-1147a0b47ba2d478b911f466b29f0fc3-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f
2016-08-18 10:06:02,167 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3
2016-08-18 10:06:02,173 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:06:02,173 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(871): Onlined 1147a0b47ba2d478b911f466b29f0fc3; next sequenceid=2
2016-08-18 10:06:02,174 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:06:02,175 INFO [PostOpenDeployTasks:1147a0b47ba2d478b911f466b29f0fc3] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:02,175 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {1147a0b47ba2d478b911f466b29f0fc3 state=PENDING_OPEN, ts=1471539962141, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,175 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(1106): Transition {1147a0b47ba2d478b911f466b29f0fc3 state=PENDING_OPEN, ts=1471539962141, server=10.22.9.171,59399,1471539932874} to {1147a0b47ba2d478b911f466b29f0fc3 state=OPEN, ts=1471539962175, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:06:02,175 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,176 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:02,177 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(452): Onlined 1147a0b47ba2d478b911f466b29f0fc3 on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,177 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,177 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539962177,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:test-14715399571411"}
2016-08-18 10:06:02,177 ERROR [B.defaultRpcServer.handler=3,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns2:test-14715399571411 state
org.apache.hadoop.hbase.TableNotFoundException: ns2:test-14715399571411
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:06:02,178 DEBUG [PostOpenDeployTasks:1147a0b47ba2d478b911f466b29f0fc3] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:02,178 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] handler.OpenRegionHandler(126): Opened ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:02,178 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:02,179 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns2:test-14715399571411 state to ENABLED in META
2016-08-18 10:06:02,389 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-18 10:06:02,503 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:test-14715399571411/write-master:593960000000000
2016-08-18 10:06:02,503 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 723msec: CreateTableProcedure (table=ns2:test-14715399571411) id=10 owner=tyu state=FINISHED
2016-08-18 10:06:02,896 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=10
2016-08-18 10:06:02,896 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns2:test-14715399571411 completed
2016-08-18 10:06:02,902 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:06:03,064 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:06:03,066 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns3:test-14715399571412', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 10:06:03,174 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns3:test-14715399571412) id=11 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:06:03,178 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-18 10:06:03,180 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:test-14715399571412/write-master:593960000000000
2016-08-18 10:06:03,286 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-18 10:06:03,301 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741847_1023{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 296
2016-08-18 10:06:03,491 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-18 10:06:03,710 DEBUG [ProcedureExecutor-2] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/test-14715399571412/.tabledesc/.tableinfo.0000000001
2016-08-18 10:06:03,711 INFO [RegionOpenAndInitThread-ns3:test-14715399571412-1] regionserver.HRegion(6162): creating HRegion ns3:test-14715399571412 HTD == 'ns3:test-14715399571412', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns3:test-14715399571412
2016-08-18 10:06:03,721 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741848_1024{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 50
2016-08-18 10:06:03,796 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-18 10:06:04,127 DEBUG [RegionOpenAndInitThread-ns3:test-14715399571412-1] regionserver.HRegion(736): Instantiated ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,127 DEBUG [RegionOpenAndInitThread-ns3:test-14715399571412-1] regionserver.HRegion(1419): Closing ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.: disabling compactions & flushes
2016-08-18 10:06:04,128 DEBUG [RegionOpenAndInitThread-ns3:test-14715399571412-1] regionserver.HRegion(1446): Updates disabled for region ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,128 INFO [RegionOpenAndInitThread-ns3:test-14715399571412-1] regionserver.HRegion(1552): Closed ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,240 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":49}]},"row":"ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7."}
2016-08-18 10:06:04,241 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:04,242 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:06:04,303 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-18 10:06:04,350 INFO [ProcedureExecutor-2] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,351 ERROR [ProcedureExecutor-2] master.TableStateManager(134): Unable to get table ns3:test-14715399571412 state
org.apache.hadoop.hbase.TableNotFoundException: ns3:test-14715399571412
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:06:04,351 INFO [ProcedureExecutor-2] master.RegionStates(1106): Transition {b3b808604c7a4b394d3cdc0636a4d8d7 state=OFFLINE, ts=1471539964350, server=null} to {b3b808604c7a4b394d3cdc0636a4d8d7 state=PENDING_OPEN, ts=1471539964351, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:06:04,351 INFO [ProcedureExecutor-2] master.RegionStateStore(207): Updating hbase:meta row ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,352 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:04,353 INFO [PriorityRpcServer.handler=4,queue=0,port=59399] regionserver.RSRpcServices(1666): Open ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,359 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(6339): Opening region: {ENCODED => b3b808604c7a4b394d3cdc0636a4d8d7, NAME => 'ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:06:04,359 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-14715399571412 b3b808604c7a4b394d3cdc0636a4d8d7
2016-08-18 10:06:04,359 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(736): Instantiated ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,363 INFO [StoreOpener-b3b808604c7a4b394d3cdc0636a4d8d7-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:06:04,363 INFO [StoreOpener-b3b808604c7a4b394d3cdc0636a4d8d7-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:06:04,364 DEBUG [StoreOpener-b3b808604c7a4b394d3cdc0636a4d8d7-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/test-14715399571412/b3b808604c7a4b394d3cdc0636a4d8d7/f
2016-08-18 10:06:04,365 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/test-14715399571412/b3b808604c7a4b394d3cdc0636a4d8d7
2016-08-18 10:06:04,371 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/test-14715399571412/b3b808604c7a4b394d3cdc0636a4d8d7/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:06:04,371 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(871): Onlined b3b808604c7a4b394d3cdc0636a4d8d7; next sequenceid=2
2016-08-18 10:06:04,372 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:06:04,373 INFO [PostOpenDeployTasks:b3b808604c7a4b394d3cdc0636a4d8d7] regionserver.HRegionServer(1952): Post open deploy tasks for ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,374 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {b3b808604c7a4b394d3cdc0636a4d8d7 state=PENDING_OPEN, ts=1471539964351, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,374 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStates(1106): Transition {b3b808604c7a4b394d3cdc0636a4d8d7 state=PENDING_OPEN, ts=1471539964351, server=10.22.9.171,59399,1471539932874} to {b3b808604c7a4b394d3cdc0636a4d8d7 state=OPEN, ts=1471539964374, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:06:04,374 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,374 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:04,375 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStates(452): Onlined b3b808604c7a4b394d3cdc0636a4d8d7 on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,375 DEBUG [ProcedureExecutor-2] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,375 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539964375,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:test-14715399571412"}
2016-08-18 10:06:04,375 ERROR [B.defaultRpcServer.handler=1,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns3:test-14715399571412 state
org.apache.hadoop.hbase.TableNotFoundException: ns3:test-14715399571412
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:06:04,376 DEBUG [PostOpenDeployTasks:b3b808604c7a4b394d3cdc0636a4d8d7] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:06:04,376 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] handler.OpenRegionHandler(126): Opened ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:04,377 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:04,377 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns3:test-14715399571412 state to ENABLED in META
2016-08-18 10:06:04,697 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:test-14715399571412/write-master:593960000000000
2016-08-18 10:06:04,697 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 1.5250sec: CreateTableProcedure (table=ns3:test-14715399571412) id=11 owner=tyu state=FINISHED
2016-08-18 10:06:05,306 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=11
2016-08-18 10:06:05,307 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns3:test-14715399571412 completed
2016-08-18 10:06:05,324 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2016-08-18 10:06:05,333 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2016-08-18 10:06:05,335 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns4:test-14715399571413', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 10:06:05,441 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns4:test-14715399571413) id=12 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
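The two hbase.Waiter entries are the test blocking on a predicate between table creations; the wait.for.ratio=[1] figure is Waiter's global multiplier applied to the 60,000 ms timeout. A sketch of the pattern as it is typically written with HBaseTestingUtility (the predicate body here is illustrative, not the test's actual condition):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.Waiter;
    import org.apache.hadoop.hbase.client.Admin;

    public class WaiterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();
        final Admin admin = util.getHBaseAdmin();
        // ... create tables here ...
        // Logs "Waiting up to [60,000] milli-secs(wait.for.ratio=[1])" via hbase.Waiter
        // and polls the predicate until it returns true or the timeout expires.
        util.waitFor(60000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            return admin.isTableAvailable(TableName.valueOf("ns3:test-14715399571412"));
          }
        });
        util.shutdownMiniCluster();
      }
    }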
2016-08-18 10:06:05,445 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-18 10:06:05,447 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:test-14715399571413/write-master:593960000000000
2016-08-18 10:06:05,553 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-18 10:06:05,566 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741849_1025{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 296
2016-08-18 10:06:05,760 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-18 10:06:05,975 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/test-14715399571413/.tabledesc/.tableinfo.0000000001
2016-08-18 10:06:05,976 INFO [RegionOpenAndInitThread-ns4:test-14715399571413-1] regionserver.HRegion(6162): creating HRegion ns4:test-14715399571413 HTD == 'ns4:test-14715399571413', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns4:test-14715399571413
2016-08-18 10:06:05,986 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741850_1026{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 50
2016-08-18 10:06:06,066 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-18 10:06:06,252 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:06:06,393 DEBUG [RegionOpenAndInitThread-ns4:test-14715399571413-1] regionserver.HRegion(736): Instantiated ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,393 DEBUG [RegionOpenAndInitThread-ns4:test-14715399571413-1] regionserver.HRegion(1419): Closing ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.: disabling compactions & flushes
2016-08-18 10:06:06,393 DEBUG [RegionOpenAndInitThread-ns4:test-14715399571413-1] regionserver.HRegion(1446): Updates disabled for region ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,394 INFO [RegionOpenAndInitThread-ns4:test-14715399571413-1] regionserver.HRegion(1552): Closed ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,508 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":49}]},"row":"ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7."}
2016-08-18 10:06:06,509 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:06,509 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:06:06,573 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-18 10:06:06,614 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,615 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table ns4:test-14715399571413 state
org.apache.hadoop.hbase.TableNotFoundException: ns4:test-14715399571413
	at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
	at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
	at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
	at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:06:06,616 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {12e7d6010d0ab46d9061da5bf6f5e4b7 state=OFFLINE, ts=1471539966614, server=null} to {12e7d6010d0ab46d9061da5bf6f5e4b7 state=PENDING_OPEN, ts=1471539966615, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:06:06,616 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,616 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:06,618 INFO [PriorityRpcServer.handler=3,queue=1,port=59399] regionserver.RSRpcServices(1666): Open ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,622 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(6339): Opening region: {ENCODED => 12e7d6010d0ab46d9061da5bf6f5e4b7, NAME => 'ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:06:06,623 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table test-14715399571413 12e7d6010d0ab46d9061da5bf6f5e4b7
2016-08-18 10:06:06,623 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(736): Instantiated ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,626 INFO [StoreOpener-12e7d6010d0ab46d9061da5bf6f5e4b7-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=0, currentSize=1071768, freeSize=1042890536, maxSize=1043962304, heapSize=1071768, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:06:06,627 INFO [StoreOpener-12e7d6010d0ab46d9061da5bf6f5e4b7-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4, incoming window min 6
2016-08-18 10:06:06,628 DEBUG [StoreOpener-12e7d6010d0ab46d9061da5bf6f5e4b7-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/test-14715399571413/12e7d6010d0ab46d9061da5bf6f5e4b7/f
2016-08-18 10:06:06,629 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/test-14715399571413/12e7d6010d0ab46d9061da5bf6f5e4b7
2016-08-18 10:06:06,635 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/test-14715399571413/12e7d6010d0ab46d9061da5bf6f5e4b7/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:06:06,635 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(871): Onlined 12e7d6010d0ab46d9061da5bf6f5e4b7; next sequenceid=2
2016-08-18 10:06:06,636 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130
2016-08-18 10:06:06,640 INFO [PostOpenDeployTasks:12e7d6010d0ab46d9061da5bf6f5e4b7] regionserver.HRegionServer(1952): Post open deploy tasks for ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,641 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {12e7d6010d0ab46d9061da5bf6f5e4b7 state=PENDING_OPEN, ts=1471539966615, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,641 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStates(1106): Transition {12e7d6010d0ab46d9061da5bf6f5e4b7 state=PENDING_OPEN, ts=1471539966615, server=10.22.9.171,59399,1471539932874} to {12e7d6010d0ab46d9061da5bf6f5e4b7 state=OPEN, ts=1471539966641, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:06:06,641 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,641 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:06,642 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStates(452): Onlined 12e7d6010d0ab46d9061da5bf6f5e4b7 on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,642 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,643 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471539966643,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:test-14715399571413"}
2016-08-18 10:06:06,643 ERROR [B.defaultRpcServer.handler=4,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns4:test-14715399571413 state
org.apache.hadoop.hbase.TableNotFoundException: ns4:test-14715399571413
	at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
	at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
	at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
	at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
	at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
	at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:06:06,643 DEBUG [PostOpenDeployTasks:12e7d6010d0ab46d9061da5bf6f5e4b7] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:06:06,644 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] handler.OpenRegionHandler(126): Opened ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:06,644 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:06:06,645 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table ns4:test-14715399571413 state to ENABLED in META
2016-08-18 10:06:06,975 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:test-14715399571413/write-master:593960000000000
2016-08-18 10:06:06,975 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 1.5300sec: CreateTableProcedure (table=ns4:test-14715399571413) id=12 owner=tyu state=FINISHED
2016-08-18 10:06:07,576 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=12
2016-08-18 10:06:07,577 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns4:test-14715399571413 completed
2016-08-18 10:06:07,577 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2016-08-18 10:06:07,584 INFO [main] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2016-08-18 10:06:07,584 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541000c
2016-08-18 10:06:07,587 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:06:07,591 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59500 because read count=-1. Number of active connections: 4
2016-08-18 10:06:07,591 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59501 because read count=-1. Number of active connections: 2
2016-08-18 10:06:07,592 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (2014530893) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:06:07,592 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (439534444) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:06:07,666 INFO [main] hbase.ResourceChecker(148): before: backup.TestIncrementalBackup#TestIncBackupRestore Thread=792, OpenFileDescriptor=1032, MaxFileDescriptor=10240, SystemLoadAverage=223, ProcessCount=273, AvailableMemoryMB=1310
2016-08-18 10:06:07,666 WARN [main] hbase.ResourceChecker(135): Thread=792 exceeds the threshold of 500
2016-08-18 10:06:07,666 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=1032 exceeds the threshold of 1024
2016-08-18 10:06:07,666 INFO [main] backup.TestIncrementalBackup(50): create full backup image for all tables
2016-08-18 10:06:07,667 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x27c3024f connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:06:07,672 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x27c3024f0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:06:07,673 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a772d19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:06:07,673 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:06:07,673 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:06:07,674 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x27c3024f-0x1569e9d5541000d connected
2016-08-18 10:06:07,694 INFO [main] util.BackupClientUtil(107): Backup root dir hdfs://localhost:59388/backupUT does not exist. Will be created.
2016-08-18 10:06:07,697 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:06:07,698 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59510; # active connections: 4
2016-08-18 10:06:07,699 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:06:07,699 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59510 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:06:07,706 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:06:07,706 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59511; # active connections: 5
2016-08-18 10:06:07,707 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:06:07,707 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59511 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:06:07,753 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4a34f6c1 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:06:07,755 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4a34f6c10x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:06:07,756 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35c98fcf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:06:07,757 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:06:07,757 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:06:07,757 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4a34f6c1-0x1569e9d5541000e connected
2016-08-18 10:06:07,758 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] backup.BackupInfo(125): CreateBackupContext: 4 ns1:test-1471539957141
2016-08-18 10:06:07,991 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure FullTableBackupProcedure (targetRootDir=hdfs://localhost:59388/backupUT; backupId=backup_1471539967737; tables=ns1:test-1471539957141,ns2:test-14715399571411,ns3:test-14715399571412,ns4:test-14715399571413) id=13 state=RUNNABLE:PRE_SNAPSHOT_TABLE added to the store.
2016-08-18 10:06:07,994 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:593960000000001
2016-08-18 10:06:07,995 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-18 10:06:07,996 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(130): Backup backup_1471539967737 started at 1471539967995.
2016-08-18 10:06:07,996 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471539967737 set status=RUNNING
2016-08-18 10:06:08,009 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:06:08,009 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59513; # active connections: 6
2016-08-18 10:06:08,010 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:06:08,011 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59513 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:06:08,015 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:06:08,015 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59514; # active connections: 2
2016-08-18 10:06:08,016 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:06:08,016 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59514 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:06:08,017 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130
2016-08-18 10:06:08,018 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(134): Backup session backup_1471539967737 has been started.
2016-08-18 10:06:08,019 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(180): read backup start code from hbase:backup
2016-08-18 10:06:08,020 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(205): write backup start code to hbase:backup 0
2016-08-18 10:06:08,021 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130
2016-08-18 10:06:08,023 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(522): Execute roll log procedure for full backup ...
2016-08-18 10:06:08,073 DEBUG [ProcedureExecutor-4] procedure.ProcedureCoordinator(177): Submitting procedure rolllog
2016-08-18 10:06:08,073 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(196): Starting procedure 'rolllog'
2016-08-18 10:06:08,073 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 10:06:08,074 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire'
2016-08-18 10:06:08,074 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members.
2016-08-18 10:06:08,075 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:08,075 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:08,076 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 10:06:08,076 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 10:06:08,076 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 10:06:08,076 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:06:08,076 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 10:06:08,076 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,076 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:06:08,076 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:08,077 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,077 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,077 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:08,077 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:08,077 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,077 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2016-08-18 10:06:08,078 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:08,078 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-18 10:06:08,078 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:08,078 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-18 10:06:08,078 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-18 10:06:08,078 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:08,078 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-18 10:06:08,088 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-18 10:06:08,088 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-18 10:06:08,088 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-18 10:06:08,089 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-18 10:06:08,089 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-18 10:06:08,089 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 10:06:08,089 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-18 10:06:08,090 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 10:06:08,090 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-18 10:06:08,091 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-18 10:06:08,091 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-18 10:06:08,091 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-18 10:06:08,091 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (rolllog) in zk
2016-08-18 10:06:08,091 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (rolllog) in zk
2016-08-18 10:06:08,092 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,092 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,093 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,092 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,093 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,093 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,093 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:08,093 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,093 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 10:06:08,093 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,093 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 10:06:08,093 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:08,094 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:08,094 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:08,095 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,095 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,096 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:08,096 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:08,096 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-18 10:06:08,096 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@73857ebe[Count = 1] remaining members to acquire global barrier
2016-08-18 10:06:08,097 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,097 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,097 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,097 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-18 10:06:08,097 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,097 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:08,097 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:08,097 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:08,098 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:08,098 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,098 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,098 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:08,099 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:08,099 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-18 10:06:08,099 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@73857ebe[Count = 0] remaining members to acquire global barrier
2016-08-18 10:06:08,099 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution.
2016-08-18 10:06:08,099 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,100 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-18 10:06:08,100 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,100 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:08,100 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,100 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-18 10:06:08,100 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-18 10:06:08,100 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:08,101 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:08,101 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:08,101 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,102 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,102 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:08,102 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:08,102 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:08,103 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:08,108 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
2016-08-18 10:06:08,108 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,59399,1471539932874
2016-08-18 10:06:08,108 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,108 INFO [rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471539936418 on 10.22.9.171,59396,1471539932179
2016-08-18 10:06:08,108 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
2016-08-18 10:06:08,108 INFO [rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471539936418 on 10.22.9.171,59399,1471539932874 2016-08-18 10:06:08,108 DEBUG [master//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested 2016-08-18 10:06:08,108 DEBUG [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested 2016-08-18 10:06:08,111 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:08,112 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:06:08,118 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:06:08,118 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:08,119 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:06:08,119 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:08,124 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741830_1006{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:06:08,124 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741843_1019{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 10957 2016-08-18 10:06:08,303 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:08,527 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:06:08,527 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL 
/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 with entries=100, filesize=10.70 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:08,528 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:08,530 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 2016-08-18 10:06:08,535 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:06:08,536 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:06:08,536 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:06:08,540 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 83 2016-08-18 10:06:08,542 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:06:08,543 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 with entries=100, filesize=10.80 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 2016-08-18 10:06:08,543 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:06:08,545 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:06:08,547 INFO [Block report 
processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 83 2016-08-18 10:06:08,548 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 with entries=7, filesize=981 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:06:08,550 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:08,551 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:08,554 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741831_1007{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 387 2016-08-18 10:06:08,558 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 450 2016-08-18 10:06:08,558 INFO [rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471539968108 on 10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,558 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:06:08,562 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:08,562 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59520; # active connections: 3 2016-08-18 10:06:08,563 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:08,563 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59520 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:08,566 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-18 10:06:08,567 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:06:08,567 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-18 10:06:08,567 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,59396,1471539932179' in zk 2016-08-18 10:06:08,568 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,568 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-18 10:06:08,568 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,568 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:08,568 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,569 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 2016-08-18 10:06:08,569 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,569 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:08,569 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:06:08,569 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:08,570 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:08,570 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:08,570 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,571 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:08,571 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:08,571 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:08,571 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:08,572 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,59396,1471539932179': 2016-08-18 10:06:08,572 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59396,1471539932179' released barrier for procedure'rolllog', counting down latch. 
Waiting for 1 more 2016-08-18 10:06:08,608 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:08,958 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 with entries=1, filesize=387 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:06:08,959 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:08,964 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:08,968 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:06:08,969 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:06:08,973 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1629 2016-08-18 10:06:09,113 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:09,378 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 with entries=5, filesize=1.59 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:09,395 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 1287 2016-08-18 10:06:09,395 INFO [rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471539968543 on 10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,395 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:06:09,400 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for 
service ClientService, sasl=false 2016-08-18 10:06:09,400 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59524; # active connections: 7 2016-08-18 10:06:09,401 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu.hfs.0 (auth:SIMPLE) 2016-08-18 10:06:09,401 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59524 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:09,406 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-18 10:06:09,407 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:09,408 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-18 10:06:09,408 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,59399,1471539932874' in zk 2016-08-18 10:06:09,411 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,411 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-18 10:06:09,411 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,411 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,411 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,411 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:09,411 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:06:09,411 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:09,412 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 
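[Editor's note] The entries above show the region server rolling its regiongroup WALs and archiving the old files to oldWALs as part of the 'rolllog' subprocedure. For illustration only, a WAL roll can also be requested directly through the public Admin API; this is a minimal sketch assuming a running cluster, with the ServerName taken from the log above. It is not the code path the test exercises (the test goes through the procedure framework).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Ask one region server to roll its WAL writer(s); rolled files are
          // later moved to oldWALs, matching the FSHLog "Archiving ..." entries.
          ServerName rs = ServerName.valueOf("10.22.9.171", 59399, 1471539932874L);
          admin.rollWALWriter(rs);
        }
      }
    }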
2016-08-18 10:06:09,412 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:09,413 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:09,413 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,413 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,414 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:09,414 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:09,415 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:09,415 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,415 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,416 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,59399,1471539932874': 2016-08-18 10:06:09,416 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more 2016-08-18 10:06:09,416 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed 2016-08-18 10:06:09,416 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(230): Running finish phase. 2016-08-18 10:06:09,416 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-18 10:06:09,416 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog 2016-08-18 10:06:09,416 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort 2016-08-18 10:06:09,417 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,417 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,417 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,417 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,417 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,417 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,417 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:06:09,417 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,418 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:06:09,418 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:09,418 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:09,419 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,419 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,419 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,419 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:09,420 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,420 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:09,420 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:09,420 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:06:09,421 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,421 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,421 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:06:09,421 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:06:09,421 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): 
Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:06:09,422 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,424 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,424 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-18 10:06:09,424 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,424 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:06:09,424 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-18 10:06:09,424 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:06:09,425 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:06:09,425 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:06:09,425 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:06:09,425 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:06:09,425 INFO [ProcedureExecutor-4] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog 2016-08-18 10:06:09,425 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:09,425 INFO [ProcedureExecutor-4] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful! 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:06:09,426 DEBUG [ProcedureExecutor-4] procedure.MasterProcedureUtil(101): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. 
(max 857 ms per retry) 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:09,426 DEBUG [ProcedureExecutor-4] procedure.MasterProcedureUtil(110): (#1) Sleeping: 100ms while waiting for procedure completion. 2016-08-18 10:06:09,426 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:06:09,426 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:06:09,426 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:06:09,426 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:06:09,426 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:06:09,526 DEBUG [ProcedureExecutor-4] procedure.MasterProcedureUtil(116): Getting current status of procedure from master... 
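[Editor's note] While the barrier runs, MasterRpcServices keeps answering "Checking to see if procedure is done procId=13" and MasterProcedureUtil sleeps between status checks. A hedged sketch of that submit-then-poll pattern from a client, assuming the generic Admin procedure API routes to the 'rolllog-proc : rolllog' signature/instance pair seen above; the backoff schedule here is illustrative, not the exact retry policy in the log:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.hbase.client.Admin;

    public class RollLogProcedureSketch {
      // Submit the globally barriered procedure, then poll until done.
      static void rollLogAndWait(Admin admin) throws Exception {
        Map<String, String> props = new HashMap<>();
        admin.execProcedure("rolllog-proc", "rolllog", props);
        long sleepMs = 100; // first retry, cf. "(#1) Sleeping: 100ms" above
        while (!admin.isProcedureFinished("rolllog-proc", "rolllog", props)) {
          Thread.sleep(sleepMs);
          sleepMs = Math.min(sleepMs * 2, 857); // capped, cf. "max 857 ms per retry"
        }
      }
    }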
2016-08-18 10:06:09,526 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:06:09,560 WARN [ProcedureExecutor-4] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694; Not a host:port pair: 10.22.9.171,59396,1471539932179.meta 2016-08-18 10:06:09,560 WARN [ProcedureExecutor-4] util.BackupServerUtil(237): Skip log file (can't parse): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:06:09,565 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1471539967737 hdfs://localhost:59388/backupUT files [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418] 2016-08-18 10:06:09,565 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:09,565 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:09,567 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:09,717 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1471539969681_ns1_test-1471539957141 from backupContext to request snapshot for backup. 
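[Editor's note] BackupSystemTable persists the roll-log results and the WAL file list for backup_1471539967737 as rows in hbase:backup. The row/column layout below is invented for the sketch (the real schema lives in BackupSystemTable); it only shows the standard client write path such bookkeeping uses:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BackupMetaWriteSketch {
      // Record one WAL path under a per-backup row (hypothetical layout).
      static void recordWalFile(Connection conn, String backupId, String walPath)
          throws Exception {
        try (Table table = conn.getTable(TableName.valueOf("hbase:backup"))) {
          Put put = new Put(Bytes.toBytes("wals:" + backupId));
          put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes(walPath), Bytes.toBytes(walPath));
          table.put(put);
        }
      }
    }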
2016-08-18 10:06:09,719 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1471539969681_ns1_test-1471539957141
org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1471539969681_ns1_test-1471539957141' doesn't exist on the filesystem
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:06:09,721 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot... 2016-08-18 10:06:09,722 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot. 2016-08-18 10:06:09,759 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:test-1471539957141/write-master:593960000000001 2016-08-18 10:06:09,760 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1471539969681_ns1_test-1471539957141 C_M_SNAPSHOT_TABLE on table ns1:test-1471539957141 2016-08-18 10:06:09,761 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH } 2016-08-18 10:06:09,762 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' to complete. (max 857 ms per retry) 2016-08-18 10:06:09,762 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion. 2016-08-18 10:06:09,769 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741857_1033{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 73 2016-08-18 10:06:09,862 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:09,863 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' is still in progress! 2016-08-18 10:06:09,863 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion. 2016-08-18 10:06:10,064 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 
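[Editor's note] After confirming no stale snapshot exists, the master starts a distributed FLUSH snapshot and BackupServerUtil polls for completion. From a plain client the same outcome is a single blocking call; a minimal sketch (for an enabled table, Admin.snapshot takes a flush snapshot by default and returns only once it is complete):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class SnapshotSketch {
      static void snapshotTable(Admin admin) throws Exception {
        // Blocks until the snapshot finishes, doing internally the
        // submit-and-poll dance that SnapshotManager/BackupServerUtil log above.
        admin.snapshot("snapshot_1471539969681_ns1_test-1471539957141",
            TableName.valueOf("ns1:test-1471539957141"));
      }
    }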
2016-08-18 10:06:10,065 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' is still in progress! 2016-08-18 10:06:10,065 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion. 2016-08-18 10:06:10,116 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:10,176 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,176 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1471539969681_ns1_test-1471539957141' 2016-08-18 10:06:10,177 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:10,177 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(204): Procedure 'snapshot_1471539969681_ns1_test-1471539957141' starting 'acquire' 2016-08-18 10:06:10,177 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1471539969681_ns1_test-1471539957141', kicking off acquire phase on members. 2016-08-18 10:06:10,177 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,177 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,180 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:10,180 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,180 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:10,181 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:10,181 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:10,180 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:10,181 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:10,181 DEBUG 
[(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,181 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-18 10:06:10,181 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,181 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,182 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,182 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 77 2016-08-18 10:06:10,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 77 2016-08-18 10:06:10,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,182 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539969681_ns1_test-1471539957141 from table ns1:test-1471539957141 type FLUSH 2016-08-18 10:06:10,182 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539969681_ns1_test-1471539957141 from table ns1:test-1471539957141 type FLUSH 2016-08-18 10:06:10,192 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,192 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,192 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' with timeout 300000ms 2016-08-18 10:06:10,192 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:10,192 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' with timeout 300000ms 2016-08-18 10:06:10,193 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 
10:06:10,193 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' starting 'acquire' stage 2016-08-18 10:06:10,194 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' locally acquired 2016-08-18 10:06:10,194 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (snapshot_1471539969681_ns1_test-1471539957141) in zk 2016-08-18 10:06:10,194 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' starting 'acquire' stage 2016-08-18 10:06:10,194 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' locally acquired 2016-08-18 10:06:10,194 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (snapshot_1471539969681_ns1_test-1471539957141) in zk 2016-08-18 10:06:10,196 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,196 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,196 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,196 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,196 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:10,196 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,196 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,197 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:06:10,196 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 
10:06:10,196 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:06:10,197 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:10,198 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,198 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,198 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:10,199 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:10,199 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:10,199 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'snapshot_1471539969681_ns1_test-1471539957141' on coordinator 2016-08-18 10:06:10,200 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@1844d391[Count = 0] remaining members to acquire global barrier 2016-08-18 10:06:10,200 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(212): Procedure 'snapshot_1471539969681_ns1_test-1471539957141' starting 'in-barrier' execution. 2016-08-18 10:06:10,200 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,200 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,200 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,200 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,200 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,200 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,201 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,200 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,201 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, 
quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,201 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' received 'reached' from coordinator. 2016-08-18 10:06:10,201 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-18 10:06:10,201 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:10,201 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:10,201 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:10,202 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,202 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:10,202 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions 2016-08-18 10:06:10,202 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:10,202 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:10,202 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish. 2016-08-18 10:06:10,202 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. started... 2016-08-18 10:06:10,203 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:10,203 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=16.16 KB 2016-08-18 10:06:10,204 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:10,204 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,204 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,204 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,204 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:10,204 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' received 'reached' from coordinator. 
2016-08-18 10:06:10,204 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' locally completed 2016-08-18 10:06:10,205 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539969681_ns1_test-1471539957141' completed for member '10.22.9.171,59396,1471539932179' in zk 2016-08-18 10:06:10,205 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' has notified controller of completion 2016-08-18 10:06:10,205 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:10,205 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' completed. 2016-08-18 10:06:10,275 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:10,365 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:10,366 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' is still in progress! 2016-08-18 10:06:10,366 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#4) Sleeping: 500ms while waiting for snapshot completion. 2016-08-18 10:06:10,524 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741858_1034{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 8292 2016-08-18 10:06:10,870 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:10,871 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' is still in progress! 2016-08-18 10:06:10,871 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#5) Sleeping: 857ms while waiting for snapshot completion. 
2016-08-18 10:06:10,930 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=103, memsize=16.2 K, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/.tmp/2b064a5eb2b34ec7bc195a73be8392cb 2016-08-18 10:06:11,193 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/.tmp/2b064a5eb2b34ec7bc195a73be8392cb as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb 2016-08-18 10:06:11,202 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb, entries=99, sequenceid=103, filesize=8.1 K 2016-08-18 10:06:11,202 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:11,203 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] regionserver.HRegion(2545): Finished memstore flush of ~16.16 KB/16552, currentsize=0 B/0 for region ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. in 1000ms, sequenceid=103, compaction requested=false 2016-08-18 10:06:11,213 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.SnapshotManifest(203): Storing 'ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.' region-info for snapshot. 2016-08-18 10:06:11,256 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles 2016-08-18 10:06:11,286 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb] hfiles 2016-08-18 10:06:11,286 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.SnapshotManifest(226): Adding reference for file (1/1): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb 2016-08-18 10:06:11,337 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741859_1035{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 90 2016-08-18 10:06:11,733 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:11,733 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' is still in progress! 
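[Editor's note] The region flush above follows the usual path: the memstore is written to a .tmp hfile, committed into the column-family directory, then referenced by the snapshot manifest. An explicit flush produces the same sequence; a minimal sketch using the table name from the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class FlushSketch {
      static void flushTable(Admin admin) throws Exception {
        // Forces memstore contents out to a store file, as in the
        // DefaultStoreFlusher/HRegionFileSystem entries above.
        admin.flush(TableName.valueOf("ns1:test-1471539957141"));
      }
    }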
2016-08-18 10:06:11,733 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#6) Sleeping: 857ms while waiting for snapshot completion. 2016-08-18 10:06:11,740 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. completed. 2016-08-18 10:06:11,741 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool22-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. 2016-08-18 10:06:11,741 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots. 2016-08-18 10:06:11,741 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots. 2016-08-18 10:06:11,741 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,741 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' locally completed 2016-08-18 10:06:11,741 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539969681_ns1_test-1471539957141' completed for member '10.22.9.171,59399,1471539932874' in zk 2016-08-18 10:06:11,745 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,745 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,745 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:11,745 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:11,745 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' has notified controller of completion 2016-08-18 10:06:11,745 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:11,745 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539969681_ns1_test-1471539957141' completed. 
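[Editor's note] The member-side barrier protocol is visible end to end here: join /acquired/<proc>/<member>, wait for the coordinator's /reached/<proc> node, then create /reached/<proc>/<member> to report completion. A stripped-down sketch of that znode choreography with the plain ZooKeeper client; paths mirror the tree dumps above, while error handling and the real payloads are omitted:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierMemberSketch {
      private final ZooKeeper zk;
      private final String base; // e.g. "/1/online-snapshot"

      BarrierMemberSketch(ZooKeeper zk, String base) {
        this.zk = zk;
        this.base = base;
      }

      // Member announces it has acquired the barrier for a procedure
      // (the parent znodes are created by the coordinator beforehand).
      void joinAcquiredBarrier(String proc, String member)
          throws KeeperException, InterruptedException {
        zk.create(base + "/acquired/" + proc + "/" + member, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }

      // Member reports local completion; the coordinator's watch on
      // /reached/<proc> fires and counts down its latch.
      void markCompleted(String proc, String member, byte[] result)
          throws KeeperException, InterruptedException {
        zk.create(base + "/reached/" + proc + "/" + member, result,
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }
    }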
2016-08-18 10:06:11,746 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:11,747 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,747 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,747 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:11,748 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:11,748 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:11,749 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,749 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,749 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:11,750 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1471539969681_ns1_test-1471539957141' member '10.22.9.171,59399,1471539932874': 2016-08-18 10:06:11,750 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'snapshot_1471539969681_ns1_test-1471539957141', counting down latch. Waiting for 0 more 2016-08-18 10:06:11,750 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(221): Procedure 'snapshot_1471539969681_ns1_test-1471539957141' execution completed 2016-08-18 10:06:11,750 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,750 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(230): Running finish phase. 
2016-08-18 10:06:11,750 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,750 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-18 10:06:11,750 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,751 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1471539969681_ns1_test-1471539957141 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2016-08-18 10:06:11,752 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,752 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,752 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,752 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,752 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,752 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:11,752 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:11,752 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:11,752 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,752 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:11,752 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:11,752 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:11,753 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing 
znode=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59396,1471539932179 2016-08-18 10:06:11,753 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,753 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,753 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,753 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:11,754 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:11,754 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,754 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,754 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59396,1471539932179 2016-08-18 10:06:11,754 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:11,755 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,755 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:11,755 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:11,756 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,756 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:11,756 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:11,757 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:11,757 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:11,757 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
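[Editor's note] Once every member has released the barrier, the coordinator clears the finished procedure's znodes under acquired/, reached/ and abort/, which produces the cascade of NodeDeleted events that follows. A sketch of such a children-first recursive delete with the plain ZooKeeper client (ZKProcedureUtil does this through HBase's own ZK utilities, so this is illustrative rather than the actual implementation):

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeCleanupSketch {
      // Delete a znode and everything beneath it, children first.
      static void deleteRecursively(ZooKeeper zk, String node)
          throws KeeperException, InterruptedException {
        for (String child : zk.getChildren(node, false)) {
          deleteRecursively(zk, node + "/" + child);
        }
        zk.delete(node, -1); // version -1: match any version
      }
    }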
2016-08-18 10:06:11,757 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,757 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141 because node does not exist (not an error)
2016-08-18 10:06:11,758 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest
2016-08-18 10:06:11,758 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-18 10:06:11,758 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-18 10:06:11,758 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-18 10:06:11,758 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-18 10:06:11,758 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:06:11,758 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59396,1471539932179
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-18 10:06:11,759 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-18 10:06:11,759 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59396,1471539932179
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141/10.22.9.171,59399,1471539932874
2016-08-18 10:06:11,759 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,760 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,767 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:11,815 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741860_1036{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 382
2016-08-18 10:06:12,118 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-18 10:06:12,130 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:06:12,224 INFO [IPC Server handler 4 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741859_1035 127.0.0.1:59389
2016-08-18 10:06:12,252 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539969681_ns1_test-1471539957141 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:12,253 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1471539969681_ns1_test-1471539957141 of table ns1:test-1471539957141 completed
2016-08-18 10:06:12,253 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539969681_ns1_test-1471539957141
2016-08-18 10:06:12,255 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:test-1471539957141/write-master:593960000000001
2016-08-18 10:06:12,595 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:12,595 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1471539969681_ns1_test-1471539957141 table=ns1:test-1471539957141 type=FLUSH }' has completed, notifying client.
2016-08-18 10:06:12,595 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1471539972595_ns2_test-14715399571411 from backupContext to request snapshot for backup.
2016-08-18 10:06:12,597 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1471539972595_ns2_test-14715399571411
org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1471539972595_ns2_test-14715399571411' doesn't exist on the filesystem
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:06:12,598 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot...
2016-08-18 10:06:12,599 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot.
2016-08-18 10:06:12,605 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:test-14715399571411/write-master:593960000000001
2016-08-18 10:06:12,606 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }
2016-08-18 10:06:12,606 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1471539972595_ns2_test-14715399571411 C_M_SNAPSHOT_TABLE on table ns2:test-14715399571411
2016-08-18 10:06:12,606 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }' to complete. (max 857 ms per retry)
2016-08-18 10:06:12,606 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion.
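The SnapshotDoesNotExistException above is the expected path, not a failure: before requesting a fresh snapshot for the backup, the procedure tries to delete any stale snapshot of the same name and treats "doesn't exist" as a no-op, which is why the very next entry is "No existing snapshot, attempting snapshot...". A hedged sketch of that delete-then-snapshot idiom using the public Admin API (the helper and its names are illustrative, not the procedure's actual code):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException;

    public final class EnsureFreshSnapshot {
      // Free up the snapshot name, then take the snapshot. On a first
      // backup run the delete misses, producing the DEBUG trace seen above.
      static void ensureFreshSnapshot(Admin admin, String snapshotName,
          TableName table) throws IOException {
        try {
          admin.deleteSnapshot(snapshotName);
        } catch (SnapshotDoesNotExistException e) {
          // Nothing to clean up; benign on the first run.
        }
        admin.snapshot(snapshotName, table);
      }
    }

For an enabled table, Admin.snapshot(String, TableName) takes a FLUSH-type snapshot, matching the "Running FLUSH table snapshot ... C_M_SNAPSHOT_TABLE" line that follows.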
2016-08-18 10:06:12,613 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741861_1037{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 75
2016-08-18 10:06:12,708 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:12,708 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }' is still in progress!
2016-08-18 10:06:12,708 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion.
2016-08-18 10:06:12,911 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:12,912 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }' is still in progress!
2016-08-18 10:06:12,912 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion.
2016-08-18 10:06:13,011 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741859_1035]
2016-08-18 10:06:13,019 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,019 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1471539972595_ns2_test-14715399571411'
2016-08-18 10:06:13,019 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2016-08-18 10:06:13,019 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(204): Procedure 'snapshot_1471539972595_ns2_test-14715399571411' starting 'acquire'
2016-08-18 10:06:13,019 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1471539972595_ns2_test-14715399571411', kicking off acquire phase on members.
2016-08-18 10:06:13,020 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,020 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,023 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-18 10:06:13,023 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,023 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-18 10:06:13,023 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:06:13,023 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-18 10:06:13,023 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-18 10:06:13,023 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:06:13,023 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,023 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2016-08-18 10:06:13,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,024 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,024 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79
2016-08-18 10:06:13,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,024 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79
2016-08-18 10:06:13,025 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,024 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539972595_ns2_test-14715399571411 from table ns2:test-14715399571411 type FLUSH
2016-08-18 10:06:13,025 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539972595_ns2_test-14715399571411 from table ns2:test-14715399571411 type FLUSH
2016-08-18 10:06:13,025 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,025 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,028 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' with timeout 300000ms
2016-08-18 10:06:13,028 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' with timeout 300000ms
2016-08-18 10:06:13,028 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2016-08-18 10:06:13,028 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2016-08-18 10:06:13,028 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' starting 'acquire' stage
2016-08-18 10:06:13,029 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' locally acquired
2016-08-18 10:06:13,029 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (snapshot_1471539972595_ns2_test-14715399571411) in zk
2016-08-18 10:06:13,029 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' starting 'acquire' stage
2016-08-18 10:06:13,029 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' locally acquired
2016-08-18 10:06:13,029 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (snapshot_1471539972595_ns2_test-14715399571411) in zk
2016-08-18 10:06:13,030 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,030 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,030 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,030 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,031 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 10:06:13,030 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,031 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,031 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 10:06:13,031 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:13,031 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-18 10:06:13,031 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:13,032 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,032 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,032 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,032 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:13,033 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:13,033 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'snapshot_1471539972595_ns2_test-14715399571411' on coordinator
2016-08-18 10:06:13,033 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@3374fdb6[Count = 0] remaining members to acquire global barrier
2016-08-18 10:06:13,033 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(212): Procedure 'snapshot_1471539972595_ns2_test-14715399571411' starting 'in-barrier' execution.
2016-08-18 10:06:13,033 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,033 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,033 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,034 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,034 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,034 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,034 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,034 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:13,034 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-18 10:06:13,034 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,034 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,034 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' received 'reached' from coordinator.
2016-08-18 10:06:13,034 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-18 10:06:13,034 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:13,034 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions
2016-08-18 10:06:13,034 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish.
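The exchange above is a two-phase barrier built from znodes: each member advertises itself under acquired/<procedure>, the coordinator counts those children, creates reached/<procedure> once everyone has joined, and the members learn of it through the watcher they registered "on znode that does not yet exist". A compact sketch of the member side with the plain ZooKeeper client (paths and names copied from this log; the real ZKProcedureMemberRpcs also handles the abort znode and several races this sketch ignores):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class MemberBarrierSketch {
      public static void main(String[] args) throws Exception {
        String proc = "snapshot_1471539972595_ns2_test-14715399571411";
        String member = "10.22.9.171,59399,1471539932874";
        String reached = "/1/online-snapshot/reached/" + proc;
        CountDownLatch reachedLatch = new CountDownLatch(1);

        ZooKeeper zk = new ZooKeeper("localhost:49480", 30_000, event -> { });

        // Watch the global barrier node before it exists; NodeCreated fires
        // when the coordinator creates it ("Watch for global barrier reached").
        zk.exists(reached, event -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated) {
            reachedLatch.countDown();
          }
        });

        // Join the acquire barrier by advertising this member as a child of
        // the procedure's acquired znode ("joining acquired barrier ... in zk").
        zk.create("/1/online-snapshot/acquired/" + proc + "/" + member,
            new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        reachedLatch.await(); // blocks until 'reached' appears
        // ... in-barrier work happens here (flush and snapshot the regions) ...
        zk.close();
      }
    }

One subtlety: exists() returns a non-null Stat when the node is already there, so a real member must check that return value or it can miss an already-created barrier; the "Set watcher on znode that does not yet exist" lines come from the ZKUtil helper that does exactly this check.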
2016-08-18 10:06:13,034 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:13,035 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. started...
2016-08-18 10:06:13,035 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,035 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=16.16 KB
2016-08-18 10:06:13,035 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,036 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,036 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528
2016-08-18 10:06:13,036 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:13,037 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:13,037 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,037 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,037 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,037 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,037 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' received 'reached' from coordinator.
2016-08-18 10:06:13,037 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' locally completed
2016-08-18 10:06:13,037 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539972595_ns2_test-14715399571411' completed for member '10.22.9.171,59396,1471539932179' in zk
2016-08-18 10:06:13,038 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' has notified controller of completion
2016-08-18 10:06:13,038 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:13,038 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' completed.
2016-08-18 10:06:13,053 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741862_1038{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 8292
2016-08-18 10:06:13,214 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:13,214 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }' is still in progress!
2016-08-18 10:06:13,214 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#4) Sleeping: 500ms while waiting for snapshot completion.
2016-08-18 10:06:13,456 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=103, memsize=16.2 K, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/.tmp/9ab6388f101244b1aa56bfbffbdfea2e
2016-08-18 10:06:13,468 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/.tmp/9ab6388f101244b1aa56bfbffbdfea2e as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e
2016-08-18 10:06:13,477 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e, entries=99, sequenceid=103, filesize=8.1 K
2016-08-18 10:06:13,478 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528
2016-08-18 10:06:13,479 INFO [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] regionserver.HRegion(2545): Finished memstore flush of ~16.16 KB/16552, currentsize=0 B/0 for region ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. in 443ms, sequenceid=103, compaction requested=false
2016-08-18 10:06:13,479 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(203): Storing 'ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.' region-info for snapshot.
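The flush numbers above fit together: a 16,552-byte memstore was flushed into an hfile reported as 8.1 K (consistent with the 8,292-byte block committed earlier in this span) in 443 ms. A quick check of that arithmetic:

    public class FlushStats {
      public static void main(String[] args) {
        long memstoreBytes = 16_552; // "Finished memstore flush of ~16.16 KB/16552"
        long hfileBytes = 8_292;     // block size reported for blk_1073741862_1038
        long flushMillis = 443;      // "in 443ms"
        System.out.printf("memstore: %.2f KB%n", memstoreBytes / 1024.0);   // 16.16 KB
        System.out.printf("hfile: %.1f KB (%.0f%% of memstore)%n",
            hfileBytes / 1024.0, 100.0 * hfileBytes / memstoreBytes);       // 8.1 KB, ~50%
        System.out.printf("flush rate: ~%.1f KB/s%n",
            memstoreBytes / 1024.0 / (flushMillis / 1000.0));               // ~36.5 KB/s
      }
    }

Roughly halving the size on flush is plausible for 99 small cells, since the memstore figure includes in-memory object overhead that the on-disk hfile layout does not carry.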
2016-08-18 10:06:13,479 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles
2016-08-18 10:06:13,479 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e] hfiles
2016-08-18 10:06:13,479 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.SnapshotManifest(226): Adding reference for file (1/1): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e
2016-08-18 10:06:13,487 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741863_1039{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 91
2016-08-18 10:06:13,716 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:13,717 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }' is still in progress!
2016-08-18 10:06:13,717 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#5) Sleeping: 857ms while waiting for snapshot completion.
2016-08-18 10:06:13,893 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. completed.
2016-08-18 10:06:13,893 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool23-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:06:13,894 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots.
2016-08-18 10:06:13,894 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots.
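The sleep sequence in this wait loop, 100, 200, 300, 500 and then 857 ms, matches HBase's standard retry backoff table (HConstants.RETRY_BACKOFF begins 1, 2, 3, 5, 10, ...) applied to a 100 ms base pause and clipped at the advertised "max 857 ms per retry", which is simply 300000 ms spread over the 350 configured retries. The formula below is inferred from those observed values, not quoted from BackupServerUtil:

    public class SnapshotWaitBackoff {
      // First entries of HBase's standard backoff table (HConstants.RETRY_BACKOFF).
      static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100};

      static long sleepMs(int attempt, long basePauseMs, long maxPerRetryMs) {
        int i = Math.min(attempt, RETRY_BACKOFF.length - 1);
        return Math.min(basePauseMs * RETRY_BACKOFF[i], maxPerRetryMs);
      }

      public static void main(String[] args) {
        long basePause = 100;              // ms
        long maxPerRetry = 300_000 / 350;  // = 857 ms, the "max 857 ms per retry"
        for (int attempt = 0; attempt < 5; attempt++) {
          // Prints 100, 200, 300, 500, 857 -- the exact sequence in this log.
          System.out.printf("(#%d) Sleeping: %dms%n",
              attempt + 1, sleepMs(attempt, basePause, maxPerRetry));
        }
      }
    }
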
2016-08-18 10:06:13,894 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,894 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' locally completed
2016-08-18 10:06:13,894 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539972595_ns2_test-14715399571411' completed for member '10.22.9.171,59399,1471539932874' in zk
2016-08-18 10:06:13,897 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,897 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' has notified controller of completion
2016-08-18 10:06:13,897 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:13,897 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,898 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:13,897 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539972595_ns2_test-14715399571411' completed.
2016-08-18 10:06:13,898 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-18 10:06:13,899 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:13,899 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,900 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,900 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,901 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:13,901 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:13,902 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,902 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,902 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,903 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1471539972595_ns2_test-14715399571411' member '10.22.9.171,59399,1471539932874':
2016-08-18 10:06:13,903 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'snapshot_1471539972595_ns2_test-14715399571411', counting down latch. Waiting for 0 more
2016-08-18 10:06:13,903 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(221): Procedure 'snapshot_1471539972595_ns2_test-14715399571411' execution completed
2016-08-18 10:06:13,903 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,903 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(230): Running finish phase.
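The "counting down latch. Waiting for 0 more" and "Waiting on: java.util.concurrent.CountDownLatch@...[Count = 0]" entries show how the coordinator tracks barrier progress: one CountDownLatch per phase, decremented from the ZooKeeper event thread as each member's znode appears. A minimal sketch of that bookkeeping (member names taken from this log; the countdowns are simulated with threads instead of watcher callbacks):

    import java.util.List;
    import java.util.concurrent.CountDownLatch;

    public class CoordinatorLatches {
      public static void main(String[] args) throws InterruptedException {
        List<String> members = List.of(
            "10.22.9.171,59399,1471539932874",
            "10.22.9.171,59396,1471539932179");
        CountDownLatch acquired = new CountDownLatch(members.size());
        CountDownLatch released = new CountDownLatch(members.size());

        for (String member : members) {
          new Thread(() -> {
            acquired.countDown(); // member's 'acquired' znode appeared
            released.countDown(); // member's 'reached' znode appeared
          }, member).start();
        }

        acquired.await(); // "Waiting for all members to 'acquire'"
        // ... coordinator creates the 'reached' barrier node here ...
        released.await(); // "Waiting for all members to 'release'"
        System.out.println("execution completed, running finish phase");
      }
    }
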
2016-08-18 10:06:13,903 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,903 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2016-08-18 10:06:13,904 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,904 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1471539972595_ns2_test-14715399571411 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2016-08-18 10:06:13,905 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,905 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,905 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,905 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,905 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,905 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:13,905 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot
2016-08-18 10:06:13,905 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-18 10:06:13,905 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-18 10:06:13,905 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:13,905 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:06:13,906 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,906 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,907 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,907 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:13,907 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,907 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,908 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:13,908 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,908 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,908 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,908 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,909 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,909 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,909 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-18 10:06:13,910 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-18 10:06:13,910 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:06:13,910 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:13,910 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411 because node does not exist (not an error)
2016-08-18 10:06:13,910 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,910 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-18 10:06:13,910 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2016-08-18 10:06:13,911 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-18 10:06:13,911 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:06:13,911 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest
2016-08-18 10:06:13,911 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort
2016-08-18 10:06:13,911 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-08-18 10:06:13,911 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,911 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,911 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,911 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,911 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2016-08-18 10:06:13,911 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired
2016-08-18 10:06:13,912 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-08-18 10:06:13,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59396,1471539932179
2016-08-18 10:06:13,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,912 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411/10.22.9.171,59399,1471539932874
2016-08-18 10:06:13,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:13,922 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741864_1040{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 384
2016-08-18 10:06:14,327 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741863_1039 127.0.0.1:59389
2016-08-18 10:06:14,337 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539972595_ns2_test-14715399571411 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:14,338 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1471539972595_ns2_test-14715399571411 of table ns2:test-14715399571411 completed
2016-08-18 10:06:14,338 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539972595_ns2_test-14715399571411
2016-08-18 10:06:14,342 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:test-14715399571411/write-master:593960000000001
2016-08-18 10:06:14,578 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:14,578 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1471539972595_ns2_test-14715399571411 table=ns2:test-14715399571411 type=FLUSH }' has completed, notifying client.
2016-08-18 10:06:14,579 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1471539974579_ns3_test-14715399571412 from backupContext to request snapshot for backup.
2016-08-18 10:06:14,580 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1471539974579_ns3_test-14715399571412
org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1471539974579_ns3_test-14715399571412' doesn't exist on the filesystem
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:06:14,582 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot...
2016-08-18 10:06:14,583 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot.
2016-08-18 10:06:14,588 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:test-14715399571412/write-master:593960000000001
2016-08-18 10:06:14,589 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }
2016-08-18 10:06:14,589 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1471539974579_ns3_test-14715399571412 C_M_SNAPSHOT_TABLE on table ns3:test-14715399571412
2016-08-18 10:06:14,589 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }' to complete. (max 857 ms per retry)
2016-08-18 10:06:14,589 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion.
2016-08-18 10:06:14,596 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741865_1041{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 75
2016-08-18 10:06:14,693 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:14,693 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }' is still in progress! 2016-08-18 10:06:14,694 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion. 2016-08-18 10:06:14,895 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:14,895 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }' is still in progress! 2016-08-18 10:06:14,895 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion. 2016-08-18 10:06:15,006 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,007 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1471539974579_ns3_test-14715399571412' 2016-08-18 10:06:15,007 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:15,007 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(204): Procedure 'snapshot_1471539974579_ns3_test-14715399571412' starting 'acquire' 2016-08-18 10:06:15,007 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1471539974579_ns3_test-14715399571412', kicking off acquire phase on members.
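The numbered "Sleeping" messages trace a bounded polling loop: the backup procedure re-asks the master whether the snapshot is done, pausing longer on each retry but never more than 857 ms per retry, and giving up after 300000 ms. The pauses observed in this run (100, 200, 300, 500, 857 ms) are consistent with a 100 ms base pause scaled by an HBase-style backoff table and capped per retry; a hedged sketch, with isSnapshotDone() as a stub for the master RPC:

    public class SnapshotWaitSketch {
        // Backoff multipliers; assumed from the observed pause sequence.
        private static final long[] BACKOFF = {1, 2, 3, 5, 10, 20};

        static boolean isSnapshotDone(int attempt) { return attempt >= 5; } // stub

        public static void main(String[] args) throws InterruptedException {
            final long maxWaitMs = 300_000; // "Waiting a max of 300000 ms"
            final long maxPauseMs = 857;    // "(max 857 ms per retry)"
            long deadline = System.currentTimeMillis() + maxWaitMs;
            int attempt = 0;
            while (!isSnapshotDone(attempt) && System.currentTimeMillis() < deadline) {
                long pause = Math.min(
                        100 * BACKOFF[Math.min(attempt, BACKOFF.length - 1)],
                        maxPauseMs);
                attempt++;
                System.out.println("(#" + attempt + ") Sleeping: " + pause
                        + "ms while waiting for snapshot completion.");
                Thread.sleep(pause);
            }
        }
    }

Run as-is this prints the same five pauses seen in the log: 100, 200, 300, 500, and then 857 ms (1000 ms clipped by the per-retry cap).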
2016-08-18 10:06:15,007 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,008 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,010 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:15,010 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,010 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:15,011 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:15,010 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:15,011 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:15,011 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:15,011 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,011 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-18 10:06:15,011 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,011 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,012 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,012 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,012 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-18 10:06:15,012 DEBUG [main-EventThread] 
procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,012 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-18 10:06:15,012 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,012 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539974579_ns3_test-14715399571412 from table ns3:test-14715399571412 type FLUSH 2016-08-18 10:06:15,012 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539974579_ns3_test-14715399571412 from table ns3:test-14715399571412 type FLUSH 2016-08-18 10:06:15,013 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,013 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' with timeout 300000ms 2016-08-18 10:06:15,013 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,013 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:15,013 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' with timeout 300000ms 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' starting 'acquire' stage 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' locally acquired 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (snapshot_1471539974579_ns3_test-14715399571412) in zk 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' starting 'acquire' stage 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' locally acquired 2016-08-18 10:06:15,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (snapshot_1471539974579_ns3_test-14715399571412) in zk 2016-08-18 10:06:15,015 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, 
quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,015 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,015 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,015 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,015 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:15,016 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:15,016 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,016 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:06:15,016 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,016 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:06:15,016 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:15,016 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,016 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,017 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,017 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:15,017 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:15,017 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'snapshot_1471539974579_ns3_test-14715399571412' on coordinator 2016-08-18 10:06:15,018 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@4d940959[Count = 0] remaining members to acquire global barrier 2016-08-18 10:06:15,018 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(212): Procedure 'snapshot_1471539974579_ns3_test-14715399571412' 
starting 'in-barrier' execution. 2016-08-18 10:06:15,018 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,018 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,018 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,018 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,018 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,018 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,018 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,018 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:15,019 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:15,018 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,019 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,019 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-18 10:06:15,019 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' received 'reached' from coordinator. 2016-08-18 10:06:15,019 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:15,019 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions 2016-08-18 10:06:15,019 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish.
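The acquired/reached znode traffic above is a two-phase barrier run over ZooKeeper: members watch /acquired, join it once locally prepared, the coordinator then creates the /reached node, and each member performs its snapshot work and reports completion under /reached. Stripped of ZooKeeper, the control flow reduces to three latches; an in-process model of the handshake (not the HBase classes):

    import java.util.concurrent.CountDownLatch;

    public class TwoPhaseBarrierModel {
        public static void main(String[] args) throws InterruptedException {
            int members = 1; // one region server hosts the table here
            CountDownLatch acquired = new CountDownLatch(members); // ~ /acquired/<proc>/<member>
            CountDownLatch reached  = new CountDownLatch(1);       // ~ /reached/<proc>
            CountDownLatch released = new CountDownLatch(members); // ~ /reached/<proc>/<member>

            Thread member = new Thread(() -> {
                try {
                    acquired.countDown();   // "locally acquired": join the barrier
                    reached.await();        // wait for coordinator's 'reached'
                    // ... flush and snapshot the local regions here ...
                    released.countDown();   // "locally completed"
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            member.start();

            acquired.await();      // coordinator: all members acquired
            reached.countDown();   // coordinator: create the reached barrier node
            released.await();      // coordinator: all members released
            System.out.println("Procedure execution completed");
            member.join();
        }
    }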
2016-08-18 10:06:15,019 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. 2016-08-18 10:06:15,019 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. started... 2016-08-18 10:06:15,019 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,020 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.SnapshotManifest(203): Storing 'ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.' region-info for snapshot. 2016-08-18 10:06:15,020 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,020 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles 2016-08-18 10:06:15,020 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [] hfiles 2016-08-18 10:06:15,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:15,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:15,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,022 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,022 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,022 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,022 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' received 'reached' from coordinator. 2016-08-18 10:06:15,022 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' locally completed 2016-08-18 10:06:15,022 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539974579_ns3_test-14715399571412' completed for member '10.22.9.171,59396,1471539932179' in zk 2016-08-18 10:06:15,023 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' has notified controller of completion 2016-08-18 10:06:15,023 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:15,023 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' completed. 2016-08-18 10:06:15,027 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741866_1042{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 52 2016-08-18 10:06:15,199 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:15,200 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }' is still in progress! 2016-08-18 10:06:15,200 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#4) Sleeping: 500ms while waiting for snapshot completion. 2016-08-18 10:06:15,433 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. completed. 2016-08-18 10:06:15,433 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool25-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7. 2016-08-18 10:06:15,433 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots. 2016-08-18 10:06:15,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots.
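Per region, the subprocedure's snapshot task brackets the work in a region operation: flush, record the region descriptor in the manifest, add references for each store file (an empty list here, hence "[]" in the log, since the test table has no flushed hfiles yet), then close the operation. A simplified stand-in for FlushSnapshotSubprocedure's RegionSnapshotTask, with the log messages mapped onto the steps:

    import java.util.Collections;
    import java.util.List;

    public class RegionSnapshotTaskSketch {
        static class Region {
            final String name;
            final List<String> hfiles;
            Region(String name, List<String> hfiles) { this.name = name; this.hfiles = hfiles; }
        }

        static void snapshotRegion(Region r) {
            System.out.println("Starting region operation on " + r.name);
            try {
                System.out.println("Flush Snapshotting region " + r.name + " started...");
                // Manifest steps, as in the log:
                System.out.println("Storing '" + r.name + "' region-info for snapshot.");
                System.out.println("Adding snapshot references for " + r.hfiles + " hfiles");
            } finally {
                // Always runs, even if the flush fails.
                System.out.println("Closing region operation on " + r.name);
            }
        }

        public static void main(String[] args) {
            snapshotRegion(new Region(
                "ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.",
                Collections.emptyList()));
        }
    }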
2016-08-18 10:06:15,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' locally completed 2016-08-18 10:06:15,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539974579_ns3_test-14715399571412' completed for member '10.22.9.171,59399,1471539932874' in zk 2016-08-18 10:06:15,437 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,437 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' has notified controller of completion 2016-08-18 10:06:15,437 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,437 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:15,438 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:15,437 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:15,438 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539974579_ns3_test-14715399571412' completed. 
2016-08-18 10:06:15,439 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:15,439 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,440 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,440 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,441 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:15,441 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:15,441 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,442 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,442 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,443 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1471539974579_ns3_test-14715399571412' member '10.22.9.171,59399,1471539932874': 2016-08-18 10:06:15,443 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'snapshot_1471539974579_ns3_test-14715399571412', counting down latch. Waiting for 0 more 2016-08-18 10:06:15,443 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(221): Procedure 'snapshot_1471539974579_ns3_test-14715399571412' execution completed 2016-08-18 10:06:15,443 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,443 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(230): Running finish phase.
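The indented "|-" dumps are a recursive listing of the /1/online-snapshot subtree, printed whenever the coordinator sees a node event; they show at a glance which members have joined which barrier. A hedged re-creation of such a tree logger against the plain ZooKeeper client (the real logZKTree's signature may differ; the connection string here is an assumption, and the test's own ensemble ran on localhost:49480):

    import org.apache.zookeeper.ZooKeeper;

    public class LogZkTree {
        static void logZkTree(ZooKeeper zk, String path, String dashes, boolean root)
                throws Exception {
            String label = root ? path : path.substring(path.lastIndexOf('/') + 1);
            System.out.println("|-" + dashes + label);
            for (String child : zk.getChildren(path, false)) {
                logZkTree(zk, path + "/" + child, dashes + "---", false);
            }
        }

        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
            try {
                logZkTree(zk, "/1/online-snapshot", "", true);
            } finally {
                zk.close();
            }
        }
    }

Each level adds three dashes, which matches the |-acquired / |----<procedure> / |-------<member> shape seen above.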
2016-08-18 10:06:15,443 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,443 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-18 10:06:15,443 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,443 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1471539974579_ns3_test-14715399571412 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2016-08-18 10:06:15,444 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,444 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,444 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,444 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,445 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:15,445 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:15,445 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,445 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:15,445 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,445 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:15,445 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:15,445 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:15,445 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,445 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1]
zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,446 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,446 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,446 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,447 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:15,447 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,447 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,447 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:15,447 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,447 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,448 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,448 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,448 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,449 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,449 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:15,449 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:15,449 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:15,450 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
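Once every member has released, the coordinator tears the barrier down: the log shows the abort/<procedure> node being created as part of the cleanup (which is why both watchers briefly report "Aborting procedure member" on an otherwise successful run) and then the per-procedure children under /acquired, /reached and /abort being deleted. The deletes must run children-first, since ZooKeeper refuses to remove a znode that still has children; a minimal recursive delete against the plain ZooKeeper client (ensemble address assumed):

    import org.apache.zookeeper.ZooKeeper;

    public class ClearProcedureZNodes {
        // Children-first delete, as znodes with children cannot be removed.
        static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
            for (String child : zk.getChildren(path, false)) {
                deleteRecursively(zk, path + "/" + child);
            }
            zk.delete(path, -1); // -1 matches any node version
        }

        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
            String proc = "snapshot_1471539974579_ns3_test-14715399571412";
            try {
                for (String barrier : new String[] {"acquired", "reached", "abort"}) {
                    String node = "/1/online-snapshot/" + barrier + "/" + proc;
                    if (zk.exists(node, false) != null) {
                        deleteRecursively(zk, node);
                    }
                }
            } finally {
                zk.close();
            }
        }
    }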
2016-08-18 10:06:15,450 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:15,450 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 because node does not exist (not an error) 2016-08-18 10:06:15,450 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,450 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:15,450 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:15,450 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:15,450 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest 2016-08-18 10:06:15,450 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:15,450 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:15,451 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,451 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,451 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,451 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,451 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:15,451 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:15,451 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for 
new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:15,452 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,452 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59396,1471539932179 2016-08-18 10:06:15,452 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,452 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412/10.22.9.171,59399,1471539932874 2016-08-18 10:06:15,452 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,452 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,460 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741867_1043{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 346 2016-08-18 10:06:15,703 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:15,704 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }' is still in progress! 2016-08-18 10:06:15,704 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#5) Sleeping: 857ms while waiting for snapshot completion.
2016-08-18 10:06:15,865 INFO [IPC Server handler 0 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741866_1042 127.0.0.1:59389 2016-08-18 10:06:15,873 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539974579_ns3_test-14715399571412 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,874 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1471539974579_ns3_test-14715399571412 of table ns3:test-14715399571412 completed 2016-08-18 10:06:15,874 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:15,877 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:test-14715399571412/write-master:593960000000001 2016-08-18 10:06:16,012 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741863_1039, blk_1073741866_1042] 2016-08-18 10:06:16,120 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:16,562 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:16,562 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1471539974579_ns3_test-14715399571412 table=ns3:test-14715399571412 type=FLUSH }' has completed, notifying client. 2016-08-18 10:06:16,562 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(478): Wrapped a SnapshotDescription snapshot_1471539976562_ns4_test-14715399571413 from backupContext to request snapshot for backup. 
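Note the commit protocol for the snapshot itself: the manifest is assembled under .hbase-snapshot/.tmp/<name> and, once the sentinel reports done, published by a single directory rename into .hbase-snapshot/<name>, so a reader never observes a half-written snapshot. The same pattern sketched on a local filesystem (java.nio standing in for the HDFS FileSystem.rename used above):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class PublishSnapshot {
        public static void main(String[] args) throws IOException {
            Path root = Files.createTempDirectory("hbase-snapshot-demo");
            String name = "snapshot_1471539974579_ns3_test-14715399571412";
            Path working = root.resolve(".tmp").resolve(name);   // build here
            Path complete = root.resolve(name);                  // publish here
            Files.createDirectories(working);
            Files.write(working.resolve("data.manifest"), "manifest".getBytes());
            // The commit point: one rename within the same filesystem.
            Files.move(working, complete, StandardCopyOption.ATOMIC_MOVE);
            System.out.println("Snapshot " + name + " completed");
        }
    }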
2016-08-18 10:06:16,564 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(567): Unable to delete snapshot_1471539976562_ns4_test-14715399571413 org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 'snapshot_1471539976562_ns4_test-14715399571413' doesn't exist on the filesystem at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:272) at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:565) at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:71) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-18 10:06:16,565 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(533): No existing snapshot, attempting snapshot... 2016-08-18 10:06:16,566 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(577): Table enabled, starting distributed snapshot. 2016-08-18 10:06:16,572 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:test-14715399571413/write-master:593960000000001 2016-08-18 10:06:16,573 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(579): Started snapshot: { ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH } 2016-08-18 10:06:16,573 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(162): Running FLUSH table snapshot snapshot_1471539976562_ns4_test-14715399571413 C_M_SNAPSHOT_TABLE on table ns4:test-14715399571413 2016-08-18 10:06:16,573 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(85): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH }' to complete. (max 857 ms per retry) 2016-08-18 10:06:16,573 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#1) Sleeping: 100ms while waiting for snapshot completion. 2016-08-18 10:06:16,586 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741868_1044{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 75 2016-08-18 10:06:16,678 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:16,678 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH }' is still in progress! 2016-08-18 10:06:16,678 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#2) Sleeping: 200ms while waiting for snapshot completion. 2016-08-18 10:06:16,881 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ...
2016-08-18 10:06:16,881 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH }' is still in progress! 2016-08-18 10:06:16,881 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#3) Sleeping: 300ms while waiting for snapshot completion. 2016-08-18 10:06:16,999 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:16,999 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1471539976562_ns4_test-14715399571413' 2016-08-18 10:06:16,999 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:16,999 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(204): Procedure 'snapshot_1471539976562_ns4_test-14715399571413' starting 'acquire' 2016-08-18 10:06:16,999 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1471539976562_ns4_test-14715399571413', kicking off acquire phase on members. 2016-08-18 10:06:17,000 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,000 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,002 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:17,002 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:17,002 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:17,003 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:17,002 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,003 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:17,003 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:17,003 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,003 DEBUG
[(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,003 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-18 10:06:17,003 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,004 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,004 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-18 10:06:17,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,004 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539976562_ns4_test-14715399571413 from table ns4:test-14715399571413 type FLUSH 2016-08-18 10:06:17,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 79 2016-08-18 10:06:17,004 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,004 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,005 DEBUG [main-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1471539976562_ns4_test-14715399571413 from table ns4:test-14715399571413 type FLUSH 2016-08-18 10:06:17,005 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' with timeout 300000ms 2016-08-18 10:06:17,005 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:17,005 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,005 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' starting 'acquire' stage 2016-08-18 10:06:17,006 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' locally acquired 2016-08-18 10:06:17,006 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: 
'10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (snapshot_1471539976562_ns4_test-14715399571413) in zk 2016-08-18 10:06:17,005 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(157): Starting subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' with timeout 300000ms 2016-08-18 10:06:17,006 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2016-08-18 10:06:17,006 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(165): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' starting 'acquire' stage 2016-08-18 10:06:17,006 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' locally acquired 2016-08-18 10:06:17,006 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (snapshot_1471539976562_ns4_test-14715399571413) in zk 2016-08-18 10:06:17,007 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,007 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,007 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,008 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,008 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:17,008 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:17,008 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,008 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,008 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:06:17,008 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:17,008 DEBUG [member: 
'10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(172): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:06:17,008 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,009 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,009 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,009 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:17,009 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:17,010 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'snapshot_1471539976562_ns4_test-14715399571413' on coordinator 2016-08-18 10:06:17,010 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@7fdd9ff4[Count = 0] remaining members to acquire global barrier 2016-08-18 10:06:17,010 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(212): Procedure 'snapshot_1471539976562_ns4_test-14715399571413' starting 'in-barrier' execution. 2016-08-18 10:06:17,010 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,010 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,010 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,011 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,011 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,011 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,011 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,011 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,011 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global
barrier:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,011 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-18 10:06:17,011 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:17,011 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' received 'reached' from coordinator. 2016-08-18 10:06:17,011 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:17,011 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.FlushSnapshotSubprocedure(137): Flush Snapshot Tasks submitted for 1 regions 2016-08-18 10:06:17,011 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(84): Starting region operation on ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. 2016-08-18 10:06:17,011 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(316): Waiting for local region snapshots to finish. 2016-08-18 10:06:17,011 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:17,011 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Flush Snapshotting region ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. started... 2016-08-18 10:06:17,012 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,012 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,012 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.SnapshotManifest(203): Storing 'ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.' region-info for snapshot. 
2016-08-18 10:06:17,013 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,013 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.SnapshotManifest(208): Creating references for hfiles 2016-08-18 10:06:17,013 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.SnapshotManifest(217): Adding snapshot references for [] hfiles 2016-08-18 10:06:17,013 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:17,013 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:17,013 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,014 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,014 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,014 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(186): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' received 'reached' from coordinator. 2016-08-18 10:06:17,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' locally completed 2016-08-18 10:06:17,014 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539976562_ns4_test-14715399571413' completed for member '10.22.9.171,59396,1471539932179' in zk 2016-08-18 10:06:17,015 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' has notified controller of completion 2016-08-18 10:06:17,015 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:17,015 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' completed. 2016-08-18 10:06:17,023 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741869_1045{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 52 2016-08-18 10:06:17,184 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:17,185 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH }' is still in progress! 2016-08-18 10:06:17,185 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#4) Sleeping: 500ms while waiting for snapshot completion. 
2016-08-18 10:06:17,430 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(104): ... Flush Snapshotting region ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. completed. 2016-08-18 10:06:17,430 DEBUG [rs(10.22.9.171,59399,1471539932874)-snapshot-pool28-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(107): Closing region operation on ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7. 2016-08-18 10:06:17,430 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(327): Completed 1/1 local region snapshots. 2016-08-18 10:06:17,430 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(329): Completed 1 local region snapshots. 2016-08-18 10:06:17,430 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(361): cancelling 0 tasks for snapshot 10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,430 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' locally completed 2016-08-18 10:06:17,430 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'snapshot_1471539976562_ns4_test-14715399571413' completed for member '10.22.9.171,59399,1471539932874' in zk 2016-08-18 10:06:17,434 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(193): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' has notified controller of completion 2016-08-18 10:06:17,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:06:17,434 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,435 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:17,435 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:17,434 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1] procedure.Subprocedure(218): Subprocedure 'snapshot_1471539976562_ns4_test-14715399571413' completed. 
2016-08-18 10:06:17,436 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:17,436 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,437 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,437 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,438 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:17,438 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:17,438 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,439 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,439 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,440 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'snapshot_1471539976562_ns4_test-14715399571413' member '10.22.9.171,59399,1471539932874': 2016-08-18 10:06:17,440 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'snapshot_1471539976562_ns4_test-14715399571413', counting down latch. Waiting for 0 more 2016-08-18 10:06:17,440 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,440 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(221): Procedure 'snapshot_1471539976562_ns4_test-14715399571413' execution completed 2016-08-18 10:06:17,440 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,440 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(230): Running finish phase. 
2016-08-18 10:06:17,440 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-18 10:06:17,440 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,440 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure snapshot_1471539976562_ns4_test-14715399571413 including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2016-08-18 10:06:17,441 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,441 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/online-snapshot 2016-08-18 10:06:17,442 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:06:17,442 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,442 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:17,442 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:17,442 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,442 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59396,1471539932179 2016-08-18 
10:06:17,443 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,443 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,443 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,443 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:06:17,444 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,444 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,444 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:06:17,444 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,444 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,445 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,445 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,445 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,445 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,446 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:17,446 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:17,446 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:17,446 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
2016-08-18 10:06:17,446 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 because node does not exist (not an error) 2016-08-18 10:06:17,447 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:17,446 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.EnabledTableSnapshotHandler(96): Done waiting - online snapshot for snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,447 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:17,447 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort 2016-08-18 10:06:17,447 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:17,447 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifest(440): Convert to Single Snapshot Manifest 2016-08-18 10:06:17,447 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/online-snapshot/abort 2016-08-18 10:06:17,447 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired 2016-08-18 10:06:17,448 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/online-snapshot/acquired 2016-08-18 10:06:17,448 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for 
new procedures under znode:'/1/online-snapshot/acquired' 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59396,1471539932179 2016-08-18 10:06:17,448 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.SnapshotManifestV1(119): No regions under directory:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413/10.22.9.171,59399,1471539932874 2016-08-18 10:06:17,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,449 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,457 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741870_1046{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 346 2016-08-18 10:06:17,686 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:17,686 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(362): Snapshotting '{ ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH }' is still in progress! 2016-08-18 10:06:17,686 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(96): (#5) Sleeping: 857ms while waiting for snapshot completion. 
2016-08-18 10:06:17,865 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741869_1045 127.0.0.1:59389 2016-08-18 10:06:17,873 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(256): Sentinel is done, just moving the snapshot from hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539976562_ns4_test-14715399571413 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,874 INFO [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(208): Snapshot snapshot_1471539976562_ns4_test-14715399571413 of table ns4:test-14715399571413 completed 2016-08-18 10:06:17,874 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] snapshot.TakeSnapshotHandler(221): Launching cleanup of working dir:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.hbase-snapshot/.tmp/snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:17,877 DEBUG [MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:test-14715399571413/write-master:593960000000001 2016-08-18 10:06:18,548 DEBUG [ProcedureExecutor-4] util.BackupServerUtil(102): Getting current status of snapshot ... 2016-08-18 10:06:18,548 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(359): Snapshot '{ ss=snapshot_1471539976562_ns4_test-14715399571413 table=ns4:test-14715399571413 type=FLUSH }' has completed, notifying client. 2016-08-18 10:06:18,654 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(577): snapshot copy for backup_1471539967737 2016-08-18 10:06:18,654 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(292): Snapshot copy is starting. 2016-08-18 10:06:18,671 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(304): There are 4 snapshots to be copied. 
2016-08-18 10:06:18,672 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1471539969681_ns1_test-1471539957141 to hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/ 2016-08-18 10:06:18,690 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(302): Doing SNAPSHOT_COPY 2016-08-18 10:06:18,706 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:59388 inputRoot=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179 2016-08-18 10:06:18,722 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:59388 outputRoot=hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141 2016-08-18 10:06:18,724 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest 2016-08-18 10:06:18,739 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741871_1047{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 73 2016-08-18 10:06:19,017 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741869_1045] 2016-08-18 10:06:19,158 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741872_1048{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 382 2016-08-18 10:06:19,564 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
2016-08-18 10:06:20,177 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5028916766483090107.jar 2016-08-18 10:06:25,919 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-64691377522679891.jar 2016-08-18 10:06:26,126 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:27,059 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7912687302491691400.jar 2016-08-18 10:06:27,083 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7838395551230925228.jar 2016-08-18 10:06:31,325 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-2184735762487917255.jar 2016-08-18 10:06:31,325 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 10:06:31,326 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 10:06:31,326 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 10:06:31,326 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:06:31,327 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 10:06:31,327 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 10:06:31,330 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:31,330 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:31,330 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar 
/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:31,331 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:31,331 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:31,332 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:31,332 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:31,414 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-18 10:06:31,677 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1471539969681_ns1_test-1471539957141' hfile list 2016-08-18 10:06:31,685 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(629): export split=0 size=8.1 K 2016-08-18 10:06:32,433 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(181): Using bufferSize=128 M 2016-08-18 10:06:32,463 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(414): copy completed for input=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb output=hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb 2016-08-18 10:06:32,464 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(415): size=8292 (8.1 K) time=0sec 7.908M/sec 2016-08-18 10:06:32,473 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741873_1049{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 8292 2016-08-18 10:06:33,345 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export 2016-08-18 10:06:33,347 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity 2016-08-18 10:06:33,356 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:33,357 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1471539969681_ns1_test-1471539957141 finished. 
2016-08-18 10:06:33,357 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1471539974579_ns3_test-14715399571412 to hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/ 2016-08-18 10:06:33,357 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(302): Doing SNAPSHOT_COPY 2016-08-18 10:06:33,372 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:59388 inputRoot=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179 2016-08-18 10:06:33,387 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:59388 outputRoot=hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412 2016-08-18 10:06:33,389 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest 2016-08-18 10:06:33,404 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741874_1050{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 75 2016-08-18 10:06:33,822 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741875_1051{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 346 2016-08-18 10:06:34,229 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 2016-08-18 10:06:34,451 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7710276428983229857.jar 2016-08-18 10:06:34,493 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:06:34,493 DEBUG [10.22.9.171,59399,1471539932874_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:06:34,878 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1cf02d9c connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:06:34,883 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1cf02d9c0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:06:34,884 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4507a2a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:06:34,885 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(160): Starting async HBase RPC client 2016-08-18 10:06:34,885 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:06:34,885 DEBUG 
[10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1cf02d9c-0x1569e9d5541000f connected 2016-08-18 10:06:34,885 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-18 10:06:34,889 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:34,889 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59553; # active connections: 8 2016-08-18 10:06:34,890 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:34,892 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59553 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:34,896 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:34,896 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59554; # active connections: 4 2016-08-18 10:06:34,896 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:34,897 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59554 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:34,900 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has already been backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:34,901 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:34,901 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has already been backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:34,902 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:34,902 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper 
sessionid=0x1569e9d5541000f 2016-08-18 10:06:34,902 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:06:34,903 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59554 because read count=-1. Number of active connections: 4 2016-08-18 10:06:34,903 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-1257755310) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:06:34,903 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-587151026) to /10.22.9.171:59399 from tyu: closed 2016-08-18 10:06:34,903 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59553 because read count=-1. Number of active connections: 8 2016-08-18 10:06:35,687 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6026214171143941441.jar 2016-08-18 10:06:36,083 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-479940200024649574.jar 2016-08-18 10:06:36,105 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3818056458221362489.jar 2016-08-18 10:06:36,129 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:37,121 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/test-14715399571412/b3b808604c7a4b394d3cdc0636a4d8d7/f 2016-08-18 10:06:37,122 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/test-14715399571413/12e7d6010d0ab46d9061da5bf6f5e4b7/f 2016-08-18 10:06:37,129 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/info 2016-08-18 10:06:37,129 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/meta 2016-08-18 10:06:37,130 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/table 2016-08-18 10:06:37,130 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/session 2016-08-18 10:06:37,131 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] 
regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/info 2016-08-18 10:06:37,430 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3664992075504310085.jar 2016-08-18 10:06:37,431 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 10:06:37,431 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 10:06:37,432 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 10:06:37,432 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:06:37,432 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 10:06:37,433 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 10:06:37,433 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:37,433 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:37,434 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:37,434 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:37,434 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:37,435 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:37,435 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar 
/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:37,484 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-18 10:06:37,746 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1471539974579_ns3_test-14715399571412' hfile list 2016-08-18 10:06:39,187 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export 2016-08-18 10:06:39,191 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity 2016-08-18 10:06:39,197 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:39,197 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1471539974579_ns3_test-14715399571412 finished. 2016-08-18 10:06:39,197 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1471539972595_ns2_test-14715399571411 to hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/ 2016-08-18 10:06:39,197 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(302): Doing SNAPSHOT_COPY 2016-08-18 10:06:39,212 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:59388 inputRoot=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179 2016-08-18 10:06:39,225 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:59388 outputRoot=hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411 2016-08-18 10:06:39,226 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest 2016-08-18 10:06:39,241 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741876_1052{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 75 2016-08-18 10:06:39,656 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741877_1053{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 384 2016-08-18 10:06:40,065 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
2016-08-18 10:06:40,280 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7920485311132696635.jar 2016-08-18 10:06:40,349 DEBUG [10.22.9.171,59441,1471539940207_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:06:40,377 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:06:40,626 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info 2016-08-18 10:06:40,626 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/meta 2016-08-18 10:06:40,627 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table 2016-08-18 10:06:40,627 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/session 2016-08-18 10:06:40,628 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info 2016-08-18 10:06:41,470 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5205088870635659471.jar 2016-08-18 10:06:41,856 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7094247977537524692.jar 2016-08-18 10:06:41,876 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5222994356486820101.jar 2016-08-18 10:06:43,075 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-851233542888942761.jar 2016-08-18 10:06:43,076 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 10:06:43,076 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 10:06:43,076 DEBUG [ProcedureExecutor-4] 
mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 10:06:43,076 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:06:43,077 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 10:06:43,077 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 10:06:43,077 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:43,078 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:43,078 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:43,078 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:43,079 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:43,079 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:43,079 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:43,127 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
2016-08-18 10:06:43,390 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1471539972595_ns2_test-14715399571411' hfile list 2016-08-18 10:06:43,392 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(629): export split=0 size=8.1 K 2016-08-18 10:06:43,832 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(181): Using bufferSize=128 M 2016-08-18 10:06:43,895 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(414): copy completed for input=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e output=hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e 2016-08-18 10:06:43,895 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(415): size=8292 (8.1 K) time=0sec 3.954M/sec 2016-08-18 10:06:43,906 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741878_1054{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 8292 2016-08-18 10:06:44,793 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export 2016-08-18 10:06:44,795 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity 2016-08-18 10:06:44,807 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1471539972595_ns2_test-14715399571411 2016-08-18 10:06:44,808 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1471539972595_ns2_test-14715399571411 finished. 
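Each snapshot copy above is a run of the stock ExportSnapshot tool against the per-table directory under the backup root. Driven programmatically it would look roughly like the following, using the snapshot name and destination from this log (a sketch, not the FullTableBackupProcedure code itself):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    public class SnapshotCopy {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Copy the snapshot's manifest and referenced hfiles to the backup root.
        int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
            "-snapshot", "snapshot_1471539972595_ns2_test-14715399571411",
            "-copy-to", "hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411"
        });
        System.exit(rc);
      }
    }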
2016-08-18 10:06:44,808 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(317): Copy snapshot snapshot_1471539976562_ns4_test-14715399571413 to hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/ 2016-08-18 10:06:44,808 DEBUG [ProcedureExecutor-4] mapreduce.MapReduceBackupCopyService(302): Doing SNAPSHOT_COPY 2016-08-18 10:06:44,821 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(929): inputFs=hdfs://localhost:59388 inputRoot=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179 2016-08-18 10:06:44,835 DEBUG [ProcedureExecutor-4] snapshot.ExportSnapshot(933): outputFs=hdfs://localhost:59388 outputRoot=hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413 2016-08-18 10:06:44,837 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(977): Copy Snapshot Manifest 2016-08-18 10:06:44,850 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741879_1055{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 75 2016-08-18 10:06:45,266 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741880_1056{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 346 2016-08-18 10:06:45,669 WARN [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 2016-08-18 10:06:45,877 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4368570763474230332.jar 2016-08-18 10:06:46,131 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:47,048 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-8594879696096508602.jar 2016-08-18 10:06:47,441 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7778052394884831010.jar 2016-08-18 10:06:47,464 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3161997959186804147.jar 2016-08-18 10:06:48,655 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6800612478391893055.jar 2016-08-18 10:06:48,655 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 10:06:48,656 DEBUG [ProcedureExecutor-4] 
mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 10:06:48,656 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 10:06:48,656 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:06:48,657 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 10:06:48,657 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 10:06:48,657 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:48,658 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:48,658 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:48,658 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.LongWritable, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:48,659 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.io.Text, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar 2016-08-18 10:06:48,659 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:48,659 DEBUG [ProcedureExecutor-4] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:06:48,705 WARN [ProcedureExecutor-4] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
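The "No job jar file set" warning recurs for every export because the procedure builds the job programmatically; it is harmless under the LocalJobRunner used by this test, where user classes are already on the local classpath. On a distributed cluster the usual fix is to let MapReduce infer the jar from a class it contains, roughly:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobJarSetup {
      public static Job newJob(Configuration conf) throws java.io.IOException {
        Job job = Job.getInstance(conf, "snapshot-export");
        // Let MapReduce infer the job jar from a class it contains, avoiding the
        // "No job jar file set. User classes may not be found." warning.
        job.setJarByClass(JobJarSetup.class);
        return job;
      }
    }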
2016-08-18 10:06:48,967 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(542): Loading Snapshot 'snapshot_1471539976562_ns4_test-14715399571413' hfile list 2016-08-18 10:06:50,352 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1007): Finalize the Snapshot Export 2016-08-18 10:06:50,355 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1018): Verify snapshot integrity 2016-08-18 10:06:50,361 INFO [ProcedureExecutor-4] snapshot.ExportSnapshot(1022): Export Completed: snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:50,361 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(326): Snapshot copy snapshot_1471539976562_ns4_test-14715399571413 finished. 2016-08-18 10:06:50,361 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(458): Add incremental backup table set to hbase:backup. ROOT=hdfs://localhost:59388/backupUT tables [ns1:test-1471539957141 ns3:test-14715399571412 ns2:test-14715399571411 ns4:test-14715399571413] 2016-08-18 10:06:50,361 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns1:test-1471539957141 2016-08-18 10:06:50,361 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns3:test-14715399571412 2016-08-18 10:06:50,361 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns2:test-14715399571411 2016-08-18 10:06:50,361 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(461): ns4:test-14715399571413 2016-08-18 10:06:50,363 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:50,467 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns1:test-1471539957141,ns3:test-14715399571412,ns2:test-14715399571411,ns4:test-14715399571413] 2016-08-18 10:06:50,474 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:50,475 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:06:50,479 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(205): write backup start code to hbase:backup 1471539936418 2016-08-18 10:06:50,480 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:50,484 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:06:50,484 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471539967737 2016-08-18 10:06:50,484 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it. 
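The BackupSystemTable entries above (adding the incremental table set, writing the RS log timestamps, writing the backup start code) are all ordinary client writes into the hbase:backup system table. The family and qualifier below are illustrative assumptions, since the log does not show the schema; only the mechanics are the point:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BackupMetaWriter {
      // Record the backup start code under an assumed "meta" family/qualifier.
      static void writeStartCode(Connection conn, String startCode) throws java.io.IOException {
        try (Table table = conn.getTable(TableName.valueOf("hbase:backup"))) {
          Put put = new Put(Bytes.toBytes("startcode"));
          put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("startcode"),
              Bytes.toBytes(startCode));
          table.put(put);
        }
      }
    }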
2016-08-18 10:06:50,490 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471539967737 FULL 2016-08-18 10:06:50,502 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741881_1057{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 170 2016-08-18 10:06:50,909 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest 2016-08-18 10:06:50,909 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:06:50,909 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471539967737 2016-08-18 10:06:50,909 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it. 2016-08-18 10:06:50,910 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471539967737 FULL 2016-08-18 10:06:50,918 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741882_1058{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 172 2016-08-18 10:06:51,323 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest 2016-08-18 10:06:51,323 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:06:51,323 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471539967737 2016-08-18 10:06:51,323 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it. 2016-08-18 10:06:51,323 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471539967737 FULL 2016-08-18 10:06:51,333 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741883_1059{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 172 2016-08-18 10:06:51,740 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest 2016-08-18 10:06:51,740 DEBUG [ProcedureExecutor-4] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:06:51,740 DEBUG [ProcedureExecutor-4] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471539967737 2016-08-18 10:06:51,741 DEBUG [ProcedureExecutor-4] impl.BackupManager(309): Current backup is a full backup, no direct ancestor for it. 
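Each table gets its own manifest, and the destination paths in these entries all follow the pattern <backupRoot>/<backupId>/<namespace>/<table>/.backup.manifest. A small helper that reproduces the layout (names here are assumptions, not the BackupManifest API):

    import org.apache.hadoop.fs.Path;

    public class ManifestPaths {
      // e.g. hdfs://localhost:59388/backupUT + backup_1471539967737 + ns2:test-14715399571411
      // -> .../backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest
      static Path manifestPath(Path backupRoot, String backupId, String tableWithNs) {
        String[] parts = tableWithNs.split(":", 2);  // namespace, table qualifier
        return new Path(new Path(new Path(new Path(backupRoot, backupId), parts[0]), parts[1]),
            ".backup.manifest");
      }
    }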
2016-08-18 10:06:51,741 DEBUG [ProcedureExecutor-4] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471539967737 FULL 2016-08-18 10:06:51,750 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741884_1060{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 172 2016-08-18 10:06:52,154 INFO [ProcedureExecutor-4] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest 2016-08-18 10:06:52,155 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira 2016-08-18 10:06:52,155 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(447): Backup backup_1471539967737 finished: type=FULL,tablelist=ns1:test-1471539957141;ns3:test-14715399571412;ns2:test-14715399571411;ns4:test-14715399571413,targetRootDir=hdfs://localhost:59388/backupUT,startts=1471539967995,completets=1471540010481,bytescopied=0 2016-08-18 10:06:52,155 DEBUG [ProcedureExecutor-4] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471539967737 set status=COMPLETE 2016-08-18 10:06:52,156 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:52,157 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(154): Trying to delete snapshot for full backup. 2016-08-18 10:06:52,157 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:52,160 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1471539969681_ns1_test-1471539957141 2016-08-18 10:06:52,161 INFO [IPC Server handler 5 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741857_1033 127.0.0.1:59389 2016-08-18 10:06:52,161 INFO [IPC Server handler 5 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741860_1036 127.0.0.1:59389 2016-08-18 10:06:52,162 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471539969681_ns1_test-1471539957141 for backup backup_1471539967737 succeeded. 2016-08-18 10:06:52,162 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:52,164 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1471539974579_ns3_test-14715399571412 2016-08-18 10:06:52,165 INFO [IPC Server handler 6 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741865_1041 127.0.0.1:59389 2016-08-18 10:06:52,165 INFO [IPC Server handler 6 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741867_1043 127.0.0.1:59389 2016-08-18 10:06:52,165 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471539974579_ns3_test-14715399571412 for backup backup_1471539967737 succeeded. 
2016-08-18 10:06:52,165 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1471539972595_ns2_test-14715399571411 2016-08-18 10:06:52,168 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1471539972595_ns2_test-14715399571411 2016-08-18 10:06:52,168 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741861_1037 127.0.0.1:59389 2016-08-18 10:06:52,169 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741864_1040 127.0.0.1:59389 2016-08-18 10:06:52,169 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471539972595_ns2_test-14715399571411 for backup backup_1471539967737 succeeded. 2016-08-18 10:06:52,169 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:52,171 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1471539976562_ns4_test-14715399571413 2016-08-18 10:06:52,172 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741868_1044 127.0.0.1:59389 2016-08-18 10:06:52,172 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741870_1046 127.0.0.1:59389 2016-08-18 10:06:52,172 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471539976562_ns4_test-14715399571413 for backup backup_1471539967737 succeeded. 2016-08-18 10:06:52,173 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(462): Backup backup_1471539967737 completed. 
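With all four manifests persisted and the status set to COMPLETE, the per-table snapshots taken at the start of the full backup are no longer needed, and the deletion loop above reduces to the standard Admin call (a sketch of the cleanup step, not the FullTableBackupProcedure source):

    import java.util.List;
    import org.apache.hadoop.hbase.client.Admin;

    public class SnapshotCleanup {
      // Delete the transient per-table snapshots once the backup is COMPLETE.
      static void deleteBackupSnapshots(Admin admin, List<String> snapshotNames)
          throws java.io.IOException {
        for (String name : snapshotNames) {
          admin.deleteSnapshot(name);  // e.g. snapshot_1471539969681_ns1_test-1471539957141
        }
      }
    }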
2016-08-18 10:06:52,282 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:593960000000001 2016-08-18 10:06:52,282 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 44.4080sec: FullTableBackupProcedure (targetRootDir=hdfs://localhost:59388/backupUT; backupId=backup_1471539967737; tables=ns1:test-1471539957141,ns2:test-14715399571411,ns3:test-14715399571412,ns4:test-14715399571413) id=13 state=FINISHED 2016-08-18 10:06:55,045 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741857_1033, blk_1073741860_1036, blk_1073741861_1037, blk_1073741864_1040, blk_1073741865_1041, blk_1073741867_1043, blk_1073741868_1044, blk_1073741870_1046] 2016-08-18 10:06:56,133 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=13 2016-08-18 10:06:56,134 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1471539967737 2016-08-18 10:06:56,138 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:56,138 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59579; # active connections: 4 2016-08-18 10:06:56,139 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:56,139 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59579 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:56,141 DEBUG [main] backup.TestIncrementalBackup(64): writing 99 rows to ns1:test-1471539957141 2016-08-18 10:06:56,149 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:56,149 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59580; # active connections: 5 2016-08-18 10:06:56,150 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:56,150 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59580 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:56,150 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:56,153 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 [roughly a hundred further "syncing writer" DEBUG entries from threads sync.0-sync.4 against this same regiongroup-2 WAL, 2016-08-18 10:06:56,155 through 10:06:56,295, elided] 2016-08-18 10:06:56,325 DEBUG [main] backup.TestIncrementalBackup(75): written 99 rows to ns1:test-1471539957141 [5 similar "syncing writer" DEBUG entries against the regiongroup-3 WAL 10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528, 10:06:56,329 through 10:06:56,337, elided] 2016-08-18 10:06:56,350 DEBUG [main] backup.TestIncrementalBackup(87): written 5 rows to ns2:test-14715399571411 2016-08-18 10:06:56,352 INFO [main] util.BackupClientUtil(105): Using existing backup root dir: hdfs://localhost:59388/backupUT 2016-08-18 10:06:56,356 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] impl.BackupSystemTable(431): get incr backup
table set from hbase:backup 2016-08-18 10:06:56,357 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.HMaster(2641): Incremental backup for the following table set: [ns3:test-14715399571412, ns4:test-14715399571413, ns1:test-1471539957141, ns2:test-14715399571411] 2016-08-18 10:06:56,362 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4da8940a connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:06:56,367 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4da8940a0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:06:56,367 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4302eb3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:06:56,367 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:06:56,368 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:06:56,368 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] backup.BackupInfo(125): CreateBackupContext: 4 ns3:test-14715399571412 2016-08-18 10:06:56,368 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4da8940a-0x1569e9d55410010 connected 2016-08-18 10:06:56,477 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:59388/backupUT; backupId=backup_1471540016356; tables=ns3:test-14715399571412,ns4:test-14715399571413,ns1:test-1471539957141,ns2:test-14715399571411) id=14 state=RUNNABLE:PREPARE_INCREMENTAL added to the store. 2016-08-18 10:06:56,480 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-18 10:06:56,480 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:593960000000002 2016-08-18 10:06:56,481 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(130): Backup backup_1471540016356 started at 1471540016481. 
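Unlike the full backup, the incremental session takes no snapshots: it covers every table that already has a baseline under the same backup root (the "current table set" reported above) and will collect the WAL data written since the recorded start code. Resolving that table set is essentially a union over prior backups; a sketch with assumed names:

    import java.util.Set;
    import java.util.TreeSet;

    public class IncrementalTableSet {
      // Union the table lists of all prior backups under this backup root, so an
      // incremental session covers exactly the tables with an existing baseline.
      static Set<String> currentTableSet(Iterable<Set<String>> priorBackupTableLists) {
        Set<String> all = new TreeSet<>();
        for (Set<String> tables : priorBackupTableLists) {
          all.addAll(tables);
        }
        return all;  // e.g. [ns1:test-1471539957141, ns2:test-14715399571411, ...]
      }
    }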
2016-08-18 10:06:56,481 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471540016356 set status=RUNNING 2016-08-18 10:06:56,485 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:56,485 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59585; # active connections: 8 2016-08-18 10:06:56,485 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:56,486 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59585 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:56,489 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:56,489 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59586; # active connections: 6 2016-08-18 10:06:56,490 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:56,490 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59586 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:56,491 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:56,492 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(134): Backup session backup_1471540016356 has been started. 2016-08-18 10:06:56,492 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(431): get incr backup table set from hbase:backup 2016-08-18 10:06:56,493 DEBUG [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(216): For incremental backup, current table set is [ns3:test-14715399571412, ns4:test-14715399571413, ns1:test-1471539957141, ns2:test-14715399571411] 2016-08-18 10:06:56,495 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(180): read backup start code from hbase:backup 2016-08-18 10:06:56,495 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:06:56,498 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(93): StartCode 1471539936418for backupID backup_1471540016356 2016-08-18 10:06:56,498 INFO [ProcedureExecutor-5] impl.IncrementalBackupManager(104): Execute roll log procedure for incremental backup ... 
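The roll-log procedure forces every region server to close its current WAL files and open fresh ones, giving the incremental backup a clean cut-off: edits up to the roll belong to this backup, later edits to the next one. Stripped of the procedure framework, the underlying primitive is a per-server WAL roll, roughly as below (the real LogRollBackupSubprocedure performs the roll from inside each region server):

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class LogRoll {
      // Ask each region server in the cluster to roll its write-ahead log writer.
      static void rollAll(Admin admin) throws java.io.IOException {
        for (ServerName server : admin.getClusterStatus().getServers()) {
          admin.rollWALWriter(server);
        }
      }
    }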
2016-08-18 10:06:56,503 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:06:56,503 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59587; # active connections: 9 2016-08-18 10:06:56,504 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:56,504 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59587 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:56,506 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(652): Client=tyu//10.22.9.171 procedure request for: rolllog-proc 2016-08-18 10:06:56,507 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] procedure.ProcedureCoordinator(177): Submitting procedure rolllog 2016-08-18 10:06:56,507 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(196): Starting procedure 'rolllog' 2016-08-18 10:06:56,507 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-18 10:06:56,507 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire' 2016-08-18 10:06:56,507 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members. 
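The ZooKeeper traffic that follows is the acquire phase of HBase's external procedure framework: the coordinator creates /1/rolllog-proc/acquired/rolllog, each member discovers it through a children watch and joins the barrier by creating a child znode named after itself, while /1/rolllog-proc/abort/rolllog serves as the error channel. The member's side of that barrier, reduced to the bare ZooKeeper calls (a sketch using the paths from this log):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class AcquirePhase {
      // Member side of the acquire barrier: announce participation by creating
      // /1/rolllog-proc/acquired/rolllog/<memberName> once the procedure znode exists.
      static void joinAcquiredBarrier(ZooKeeper zk, String procZnode, String memberName)
          throws KeeperException, InterruptedException {
        // The coordinator has already created procZnode
        // (e.g. /1/rolllog-proc/acquired/rolllog).
        zk.create(procZnode + "/" + memberName, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // The coordinator watches for this child and advances to the
        // "reached" phase once every expected member has appeared.
      }
    }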
2016-08-18 10:06:56,508 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:56,508 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:56,508 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 10:06:56,508 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 10:06:56,508 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:06:56,508 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,508 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 10:06:56,509 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 10:06:56,509 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:06:56,509 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:56,509 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,509 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,509 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:56,509 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:56,509 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,509 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2016-08-18 10:06:56,509 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:56,510 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-18 10:06:56,510 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:56,510 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-18 10:06:56,510 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-18 10:06:56,510 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 10:06:56,510 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-18 10:06:56,510 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-18 10:06:56,510 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-18 10:06:56,510 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-18 10:06:56,510 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-18 10:06:56,510 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 10:06:56,510 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (rolllog) in zk
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-18 10:06:56,511 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (rolllog) in zk
2016-08-18 10:06:56,512 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,512 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,512 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,512 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,512 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,512 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:56,512 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:56,512 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,513 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,513 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 10:06:56,513 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:56,513 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,513 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 10:06:56,513 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:56,513 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,513 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@69c362fd[Count = 1] remaining members to acquire global barrier
2016-08-18 10:06:56,514 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,514 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:56,514 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:56,515 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:56,515 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:56,515 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,515 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,516 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:56,516 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:56,516 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-18 10:06:56,516 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@69c362fd[Count = 0] remaining members to acquire global barrier
2016-08-18 10:06:56,516 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution.
2016-08-18 10:06:56,516 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,517 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-18 10:06:56,517 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,517 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:56,517 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:56,517 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-18 10:06:56,517 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,518 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-18 10:06:56,517 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,517 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
2016-08-18 10:06:56,518 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:56,518 INFO [rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471539968108 on 10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,518 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,518 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
2016-08-18 10:06:56,518 INFO [rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471539968543 on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,518 DEBUG [master//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested
2016-08-18 10:06:56,518 DEBUG [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested
2016-08-18 10:06:56,518 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:56,518 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:56,519 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:56,519 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:56,520 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:56,520 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:56,521 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog
2016-08-18 10:06:56,521 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:06:56,521 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518
2016-08-18 10:06:56,525 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108
2016-08-18 10:06:56,525 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:06:56,526 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108
2016-08-18 10:06:56,526 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:06:56,530 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741852_1028{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 91
2016-08-18 10:06:56,530 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741851_1027{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 11592
2016-08-18 10:06:56,587 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 10:06:56,795 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 10:06:56,932 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518
2016-08-18 10:06:56,933 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 with entries=101, filesize=11.32 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:06:56,933 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:06:56,933 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721
2016-08-18 10:06:56,937 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935
2016-08-18 10:06:56,938 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936
2016-08-18 10:06:56,941 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:06:56,941 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528
2016-08-18 10:06:56,943 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528
2016-08-18 10:06:56,943 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:06:56,946 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741854_1030{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 91
2016-08-18 10:06:56,946 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741853_1029{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1196
2016-08-18 10:06:57,100 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 10:06:57,352 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 with entries=7, filesize=1.17 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936
2016-08-18 10:06:57,352 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935
2016-08-18 10:06:57,353 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:06:57,353 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:06:57,357 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355
2016-08-18 10:06:57,361 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:06:57,363 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:06:57,366 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741855_1031{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 91
2016-08-18 10:06:57,370 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 852
2016-08-18 10:06:57,370 INFO [rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471540016518 on 10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,371 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-18 10:06:57,373 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup
2016-08-18 10:06:57,374 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961
2016-08-18 10:06:57,375 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed
2016-08-18 10:06:57,375 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,59396,1471539932179' in zk
2016-08-18 10:06:57,377 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,377 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion
2016-08-18 10:06:57,377 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:57,377 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,377 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,377 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,377 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:57,377 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:57,377 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed.
2016-08-18 10:06:57,378 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:57,378 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:57,379 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:57,379 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,379 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:57,379 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:57,380 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:57,380 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:57,380 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,59396,1471539932179':
2016-08-18 10:06:57,381 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59396,1471539932179' released barrier for procedure 'rolllog', counting down latch. Waiting for 1 more
2016-08-18 10:06:57,602 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 10:06:57,774 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355
2016-08-18 10:06:57,775 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:06:57,780 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778
2016-08-18 10:06:57,785 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961
2016-08-18 10:06:57,786 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961
2016-08-18 10:06:57,790 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741856_1032{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 4383
2016-08-18 10:06:58,195 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 with entries=8, filesize=4.28 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778
2016-08-18 10:06:58,210 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 1692
2016-08-18 10:06:58,211 INFO [rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471540017355 on 10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,211 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-18 10:06:58,213 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup
2016-08-18 10:06:58,215 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778
2016-08-18 10:06:58,216 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed
2016-08-18 10:06:58,217 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,59399,1471539932874' in zk
2016-08-18 10:06:58,220 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,220 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion
2016-08-18 10:06:58,220 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,220 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,220 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,220 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:58,220 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:58,220 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:58,220 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed.
2016-08-18 10:06:58,222 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:58,222 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:58,222 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,223 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:58,223 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:58,223 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:58,224 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:58,224 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,224 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:58,225 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,59399,1471539932874':
2016-08-18 10:06:58,225 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more
2016-08-18 10:06:58,225 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed
2016-08-18 10:06:58,225 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(230): Running finish phase.
2016-08-18 10:06:58,225 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2016-08-18 10:06:58,225 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog
2016-08-18 10:06:58,225 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort
2016-08-18 10:06:58,226 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,226 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,226 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,226 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,226 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,226 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,227 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-18 10:06:58,227 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-18 10:06:58,227 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,227 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,227 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 10:06:58,227 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 10:06:58,227 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-18 10:06:58,227 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:58,227 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 10:06:58,227 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-18 10:06:58,228 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:58,228 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,228 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:58,228 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,229 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 10:06:58,229 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179
2016-08-18 10:06:58,229 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:58,229 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 10:06:58,229 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 10:06:58,230 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874
2016-08-18 10:06:58,230 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179
2016-08-18 10:06:58,230 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 10:06:58,230 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 10:06:58,230 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 10:06:58,231 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 10:06:58,231 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog
2016-08-18 10:06:58,231 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful!
2016-08-18 10:06:58,231 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:06:58,231 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error) 2016-08-18 10:06:58,231 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:06:58,231 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:06:58,231 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:06:58,231 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:06:58,231 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:06:58,232 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:06:58,232 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:06:58,232 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:06:58,233 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2481): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog'' to complete. (max 857 ms per retry) 2016-08-18 10:06:58,233 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2490): (#1) Sleeping: 100ms while waiting for procedure completion. 2016-08-18 10:06:58,334 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2496): Getting current status of procedure from master... 2016-08-18 10:06:58,340 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(904): Checking to see if procedure from request:rolllog-proc is done 2016-08-18 10:06:58,341 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:06:58,344 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(215): In getLogFilesForNewBackup() olderTimestamps: {10.22.9.171:59399=1471539936418, 10.22.9.171:59396=1471539936418} newestTimestamps: {10.22.9.171:59399=1471539968543, 10.22.9.171:59396=1471539968108} 2016-08-18 10:06:58,348 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 2016-08-18 10:06:58,348 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:06:58,348 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 1471539937974 <= 1471539968108 2016-08-18 10:06:58,349 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 2016-08-18 10:06:58,349 WARN [ProcedureExecutor-5] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta; Not a host:port pair: 10.22.9.171,59396,1471539932179.meta 2016-08-18 10:06:58,349 WARN [ProcedureExecutor-5] util.BackupServerUtil(237): Skip log file (can't parse): 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 1471539940130 <= 1471539968543 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 1471539968108 <= 1471539968543 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 2016-08-18 10:06:58,350 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 1471539968528 <= 1471539968543 2016-08-18 10:06:58,351 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:06:58,352 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(316): excluding old wal 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 1471539936418 <= 1471539936418 2016-08-18 10:06:58,352 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(325): newest log hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 host: 10.22.9.171:59396 newTimestamp: 1471539968108 2016-08-18 10:06:58,352 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 1471539936418 <= 1471539936418 2016-08-18 10:06:58,352 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(500): get WAL files from hbase:backup 2016-08-18 10:06:58,357 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471539967737/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:06:58,357 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471539967737/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(313): setting incr backup file list 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:06:58,358 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 
10:06:58,467 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x756b165d connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:06:58,471 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x756b165d0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:06:58,472 DEBUG [ProcedureExecutor-5] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79e331ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:06:58,472 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:06:58,472 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x756b165d-0x1569e9d55410011 connected 2016-08-18 10:06:58,472 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:06:58,475 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:06:58,475 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59595; # active connections: 10 2016-08-18 10:06:58,476 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:06:58,476 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59595 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:06:58,477 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns1:test-1471539957141 2016-08-18 10:06:58,489 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741891_1067{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 294 2016-08-18 10:06:58,608 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-18 10:06:58,904 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540016356/ns1/test-1471539957141/.tabledesc/.tableinfo.0000000001 2016-08-18 10:06:58,905 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
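
The IncrementalBackupManager records above show the per-file inclusion test for the incremental image: every WAL file name ends in its creation timestamp, and a file is kept when that suffix is at or below the boundary saved by the previous backup ("not excluding ... 1471539940130 <= 1471539968543"); files newer than the boundary stay with the live cluster for the next increment. A minimal sketch of that filename test, assuming only the naming convention visible in the log (class and method names here are illustrative, not the HBase internals):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WalFilterSketch {
    // Extracts the trailing creation timestamp from a WAL name such as
    // "10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130".
    static long creationTs(String walName) {
        return Long.parseLong(walName.substring(walName.lastIndexOf('.') + 1));
    }

    // Keeps every WAL created at or before the boundary timestamp recorded
    // by the previous backup; newer WALs are left for the next increment.
    static List<String> selectForIncremental(List<String> walNames, long boundaryTs) {
        List<String> included = new ArrayList<>();
        for (String name : walNames) {
            if (creationTs(name) <= boundaryTs) {
                included.add(name);
            }
        }
        return included;
    }

    public static void main(String[] args) {
        List<String> wals = Arrays.asList(
            "10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130",
            "10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778");
        // Boundary value taken from the log lines above.
        System.out.println(selectForIncremental(wals, 1471539968543L));
    }
}

Run against the two names above, this prints only the regiongroup-1.1471539940130 file, matching the "not excluding" decision in the log.
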
2016-08-18 10:06:58,906 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x756b165d connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:06:58,910 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x756b165d0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:06:58,912 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns1:test-1471539957141 2016-08-18 10:06:58,912 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x756b165d-0x1569e9d55410012 connected 2016-08-18 10:06:58,919 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741892_1068{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 49 2016-08-18 10:06:59,326 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns1:test-1471539957141 2016-08-18 10:06:59,328 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns3:test-14715399571412 2016-08-18 10:06:59,341 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741893_1069{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 296 2016-08-18 10:06:59,746 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540016356/ns3/test-14715399571412/.tabledesc/.tableinfo.0000000001 2016-08-18 10:06:59,747 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
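
Each "Attempting to copy table info" / "Wrote descriptor into ... .tabledesc/.tableinfo.0000000001" pair above snapshots the current schema of one table under the backup root, so a restore can re-create the table without the source cluster. A rough equivalent using only public client and FileSystem calls (the real code path goes through FSTableDescriptors; writeDescriptor and the exact file layout are assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CopyTableInfoSketch {
    // Hypothetical helper: snapshot the table schema into the backup layout,
    // e.g. <backupRoot>/<backupId>/<ns>/<table>/.tabledesc/.tableinfo.0000000001
    static void writeDescriptor(Configuration conf, TableName table, Path backupTableDir)
            throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            HTableDescriptor htd = admin.getTableDescriptor(table);
            FileSystem fs = FileSystem.get(backupTableDir.toUri(), conf);
            Path tableInfo =
                new Path(new Path(backupTableDir, ".tabledesc"), ".tableinfo.0000000001");
            try (FSDataOutputStream out = fs.create(tableInfo, true)) {
                out.write(htd.toByteArray()); // pb-serialized schema, parseable at restore time
            }
        }
    }
}
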
2016-08-18 10:06:59,747 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x756b165d connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:06:59,751 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x756b165d0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:06:59,757 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns3:test-14715399571412 2016-08-18 10:06:59,757 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x756b165d-0x1569e9d55410013 connected 2016-08-18 10:06:59,765 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741894_1070{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 50 2016-08-18 10:07:00,169 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns3:test-14715399571412 2016-08-18 10:07:00,171 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns2:test-14715399571411 2016-08-18 10:07:00,184 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741895_1071{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 295 2016-08-18 10:07:00,594 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540016356/ns2/test-14715399571411/.tabledesc/.tableinfo.0000000001 2016-08-18 10:07:00,595 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
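
The "write region info" steps in the same loop persist each region's boundaries next to the descriptor, so restore can re-create the same split points. A hedged sketch with public APIs; the output file name is invented for illustration:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WriteRegionInfoSketch {
    // Writes one delimited HRegionInfo per region of the table into outDir;
    // "regioninfo.seq" is a placeholder name, not the real backup layout.
    static void writeRegionInfo(Configuration conf, TableName table, Path outDir)
            throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            FileSystem fs = FileSystem.get(outDir.toUri(), conf);
            List<HRegionInfo> regions = admin.getTableRegions(table);
            try (FSDataOutputStream out = fs.create(new Path(outDir, "regioninfo.seq"), true)) {
                for (HRegionInfo ri : regions) {
                    out.write(ri.toDelimitedByteArray()); // delimited so entries can be streamed back
                }
            }
        }
    }
}
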
2016-08-18 10:07:00,595 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x756b165d connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:07:00,599 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x756b165d0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:07:00,601 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns2:test-14715399571411 2016-08-18 10:07:00,601 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x756b165d-0x1569e9d55410014 connected 2016-08-18 10:07:00,609 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741896_1072{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 50 2016-08-18 10:07:00,613 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-18 10:07:01,013 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns2:test-14715399571411 2016-08-18 10:07:01,016 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns4:test-14715399571413 2016-08-18 10:07:01,034 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741897_1073{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 296 2016-08-18 10:07:01,445 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540016356/ns4/test-14715399571413/.tabledesc/.tableinfo.0000000001 2016-08-18 10:07:01,446 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo. 
2016-08-18 10:07:01,446 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x756b165d connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:07:01,450 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x756b165d0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:07:01,451 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns4:test-14715399571413 2016-08-18 10:07:01,451 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x756b165d-0x1569e9d55410015 connected 2016-08-18 10:07:01,458 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741898_1074{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 50 2016-08-18 10:07:01,862 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns4:test-14715399571413 2016-08-18 10:07:01,863 INFO [ProcedureExecutor-5] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410011 2016-08-18 10:07:01,866 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:07:01,867 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(125): Incremental copy is starting. 2016-08-18 10:07:01,867 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (992202727) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:07:01,867 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59595 because read count=-1. 
Number of active connections: 10 2016-08-18 10:07:01,872 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(308): Doing COPY_TYPE_DISTCP 2016-08-18 10:07:01,901 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(318): DistCp options: [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152, hdfs://localhost:59388/backupUT/backup_1471540016356/WALs] 2016-08-18 10:07:02,126 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741899_1075{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 981 2016-08-18 10:07:02,560 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741900_1076{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1629 2016-08-18 10:07:02,988 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741901_1077{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 11592 2016-08-18 10:07:03,413 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741902_1078{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1196 2016-08-18 10:07:03,835 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741903_1079{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:07:04,262 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to 
blk_1073741904_1080{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:07:04,616 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-18 10:07:04,688 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741905_1081{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 10957 2016-08-18 10:07:05,116 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741906_1082{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 11059 2016-08-18 10:07:05,622 INFO [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(247): Progress: 100.0% subTask: 1.0 mapProgress: 1.0 2016-08-18 10:07:05,623 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471540016356 set status=RUNNING 2016-08-18 10:07:05,625 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:07:05,626 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(140): Backup progress data "100%" has been updated to hbase:backup for backup_1471540016356 2016-08-18 10:07:05,626 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(256): Backup progress data updated to hbase:backup: "Progress: 100.0% - 37596 bytes copied." 
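
"Doing COPY_TYPE_DISTCP" above hands the selected WAL files to a DistCp job whose option list, as logged, is simply [source, ..., source, target]. A minimal stand-alone driver against the Hadoop 2.7 DistCp API the test runs on; the BackupDistCp subclass in the log additionally feeds progress back into hbase:backup, which this sketch omits:

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class WalDistCpSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Sources: the WAL files chosen for this incremental image (one shown);
        // target: <backupRoot>/<backupId>/WALs, as in the log.
        List<Path> sources = Arrays.asList(new Path(
            "hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974"));
        Path target = new Path("hdfs://localhost:59388/backupUT/backup_1471540016356/WALs");
        DistCpOptions options = new DistCpOptions(sources, target); // Hadoop 2.x constructor
        DistCp distcp = new DistCp(conf, options);
        Job job = distcp.execute(); // runs the copy as an MR job and waits for it
        System.out.println("DistCp job " + job.getJobID() + " succeeded: " + job.isSuccessful());
    }
}
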
2016-08-18 10:07:05,627 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(271): DistCp job-id: job_local1372242507_0005 completed: true true 2016-08-18 10:07:05,633 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(274): Counters: 23 File System Counters FILE: Number of bytes read=94407220 FILE: Number of bytes written=94695617 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=94138 HDFS: Number of bytes written=2273637 HDFS: Number of read operations=638 HDFS: Number of large read operations=0 HDFS: Number of write operations=289 Map-Reduce Framework Map input records=8 Map output records=0 Input split bytes=264 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=0 Total committed heap usage (bytes)=1214251008 File Input Format Counters Bytes Read=2674 File Output Format Counters Bytes Written=0 org.apache.hadoop.tools.mapred.CopyMapper$Counter BYTESCOPIED=37596 BYTESEXPECTED=37596 COPY=8 2016-08-18 10:07:05,634 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(326): list of hdfs://localhost:59388/backupUT/backup_1471540016356/WALs for distcp 0 2016-08-18 10:07:05,637 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024240; access_time=1471540023826; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,637 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471540022532; access_time=1471540022117; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,637 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024666; access_time=1471540024253; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,637 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471540022966; access_time=1471540022551; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,637 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471540025094; access_time=1471540024679; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,637 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): 
LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471540023391; access_time=1471540022979; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,638 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471540025521; access_time=1471540025107; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,638 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528; isDirectory=false; length=1196; replication=1; blocksize=134217728; modification_time=1471540023814; access_time=1471540023404; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:07:05,642 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(176): Incremental copy from hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 to hdfs://localhost:59388/backupUT/backup_1471540016356/WALs finished. 
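
After the copy completes, MapReduceBackupCopyService lists the target directory and logs one LocatedFileStatus per copied WAL, a cheap check that every source arrived with the expected length. The same listing with plain FileSystem calls:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListCopiedWals {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path target = new Path("hdfs://localhost:59388/backupUT/backup_1471540016356/WALs");
        FileSystem fs = FileSystem.get(target.toUri(), conf);
        RemoteIterator<LocatedFileStatus> it = fs.listFiles(target, false); // non-recursive
        long total = 0;
        while (it.hasNext()) {
            LocatedFileStatus st = it.next();
            total += st.getLen();
            System.out.println(st.getPath() + " len=" + st.getLen());
        }
        System.out.println("bytes copied: " + total); // the log reports 37596 for this run
    }
}
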
2016-08-18 10:07:05,642 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1471540016356 hdfs://localhost:59388/backupUT files [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152] 2016-08-18 10:07:05,642 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:07:05,643 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:07:05,645 DEBUG [sync.3] 
wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:07:05,751 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:07:05,756 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns1:test-1471539957141,ns3:test-14715399571412,ns2:test-14715399571411,ns4:test-14715399571413] 2016-08-18 10:07:05,758 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:07:05,759 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:07:05,763 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(205): write backup start code to hbase:backup 1471539968108 2016-08-18 10:07:05,764 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:07:05,765 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:07:05,765 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:05,765 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:05,765 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:05,770 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:05,770 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540016356 INCREMENTAL 2016-08-18 10:07:05,770 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:05,770 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:05,770 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:05,773 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:05,781 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741907_1083{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 811 2016-08-18 10:07:06,183 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540016356/ns1/test-1471539957141/.backup.manifest 2016-08-18 10:07:06,184 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 
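
hbase:backup, written to by BackupSystemTable above ("add WAL files", "write RS log time stamps", "write backup start code"), is an ordinary HBase table, so the bookkeeping is plain Puts and Gets. A sketch of the write side; the row key and column layout here are invented for illustration and are not the real schema:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BackupMetaSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.valueOf("hbase:backup"))) {
            // Record one copied WAL under the backup session's row (illustrative layout only).
            Put p = new Put(Bytes.toBytes("wal:backup_1471540016356"));
            p.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("file"), Bytes.toBytes(
                "hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721"));
            meta.put(p);
        }
    }
}
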
2016-08-18 10:07:06,184 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:06,184 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:06,184 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:06,188 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:06,189 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540016356 INCREMENTAL 2016-08-18 10:07:06,189 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:06,189 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:06,189 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:06,192 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:06,200 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741908_1084{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 814 2016-08-18 10:07:06,602 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540016356/ns3/test-14715399571412/.backup.manifest 2016-08-18 10:07:06,602 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:07:06,602 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:06,602 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:06,603 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:06,607 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:06,607 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540016356 INCREMENTAL 2016-08-18 10:07:06,607 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:06,607 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:06,607 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:06,610 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 
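
Each "Manifest file stored to ... .backup.manifest" record persists a small per-table description of the image (type, table, ancestors) under the backup directory. Writing such a sidecar is a plain FileSystem create; the JSON payload below is a placeholder, not the real serialized format:

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ManifestWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path manifest = new Path(
            "hdfs://localhost:59388/backupUT/backup_1471540016356/ns1/test-1471539957141/.backup.manifest");
        FileSystem fs = FileSystem.get(manifest.toUri(), conf);
        // Placeholder content; BackupManifest uses its own serialization.
        String payload = "{\"backupId\":\"backup_1471540016356\",\"type\":\"INCREMENTAL\","
            + "\"ancestors\":[\"backup_1471539967737\"]}";
        try (FSDataOutputStream out = fs.create(manifest, true)) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
    }
}
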
2016-08-18 10:07:06,618 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741909_1085{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 814 2016-08-18 10:07:07,026 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540016356/ns2/test-14715399571411/.backup.manifest 2016-08-18 10:07:07,026 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:07:07,026 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:07,026 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:07,026 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:07,030 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:07,031 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540016356 INCREMENTAL 2016-08-18 10:07:07,031 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:07,031 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:07,031 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:07,034 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 2016-08-18 10:07:07,042 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741910_1086{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 814 2016-08-18 10:07:07,445 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540016356/ns4/test-14715399571413/.backup.manifest 2016-08-18 10:07:07,445 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 4 tables exist in table set. 2016-08-18 10:07:07,445 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540016356 2016-08-18 10:07:07,445 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:07:07,446 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:07:07,450 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup. 
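
The repeated "Getting the direct ancestors" / "Got 1 ancestors" records compute the dependency chain an incremental image must declare in its manifest: walk backup history from newest to oldest until the covering full backup. Sketched over a simplified history record (the Image fields are guesses for illustration):

import java.util.ArrayList;
import java.util.List;

public class AncestorSketch {
    enum Type { FULL, INCREMENTAL }
    static class Image {
        String id; Type type;
        Image(String id, Type t) { this.id = id; this.type = t; }
    }

    // Newest-first history; the ancestors of an incremental image are every
    // earlier image back to (and including) the most recent FULL one.
    static List<Image> directAncestors(List<Image> historyNewestFirst) {
        List<Image> ancestors = new ArrayList<>();
        for (Image img : historyNewestFirst) {
            ancestors.add(img);
            if (img.type == Type.FULL) break; // a full image closes the chain
        }
        return ancestors;
    }

    public static void main(String[] args) {
        List<Image> history = new ArrayList<>();
        history.add(new Image("backup_1471539967737", Type.FULL)); // prior full backup from the log
        System.out.println(directAncestors(history).size()); // -> 1, matching "Got 1 ancestors"
    }
}
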
2016-08-18 10:07:07,450 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540016356 INCREMENTAL 2016-08-18 10:07:07,461 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741911_1087{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1052 2016-08-18 10:07:07,866 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/.backup.manifest 2016-08-18 10:07:07,866 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira 2016-08-18 10:07:07,867 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(447): Backup backup_1471540016356 finished: type=INCREMENTAL,tablelist=ns1:test-1471539957141;ns3:test-14715399571412;ns2:test-14715399571411;ns4:test-14715399571413,targetRootDir=hdfs://localhost:59388/backupUT,startts=1471540016481,completets=1471540025765,bytescopied=0 2016-08-18 10:07:07,867 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471540016356 set status=COMPLETE 2016-08-18 10:07:07,869 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:07:07,872 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(462): Backup backup_1471540016356 completed. 2016-08-18 10:07:07,977 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:593960000000002 2016-08-18 10:07:07,978 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 11.5040sec: IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:59388/backupUT; backupId=backup_1471540016356; tables=ns3:test-14715399571412,ns4:test-14715399571413,ns1:test-1471539957141,ns2:test-14715399571411) id=14 state=FINISHED 2016-08-18 10:07:14,621 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=14 2016-08-18 10:07:14,622 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1471540016356 2016-08-18 10:07:14,627 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest 2016-08-18 10:07:14,631 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:07:14,631 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest 2016-08-18 10:07:14,632 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest 2016-08-18 10:07:14,635 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:07:14,635 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest 2016-08-18 10:07:14,636 DEBUG [main] impl.BackupManifest(325): Loading manifest from: 
hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest 2016-08-18 10:07:14,639 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:07:14,639 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest 2016-08-18 10:07:14,640 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest 2016-08-18 10:07:14,645 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:07:14,645 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest 2016-08-18 10:07:14,646 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5a3eadd6 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:07:14,648 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5a3eadd60x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:07:14,649 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3615c8f2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:07:14,649 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:07:14,649 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:07:14,650 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5a3eadd6-0x1569e9d55410016 connected 2016-08-18 10:07:14,651 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:07:14,651 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59647; # active connections: 10 2016-08-18 10:07:14,652 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:14,652 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59647 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:14,653 INFO [main] impl.RestoreClientImpl(167): HBase table ns1:table1_restore does not exist. It will be created during restore process 2016-08-18 10:07:14,654 INFO [main] impl.RestoreClientImpl(167): HBase table ns2:table2_restore does not exist. It will be created during restore process 2016-08-18 10:07:14,655 INFO [main] impl.RestoreClientImpl(167): HBase table ns3:table3_restore does not exist. It will be created during restore process 2016-08-18 10:07:14,655 INFO [main] impl.RestoreClientImpl(167): HBase table ns4:table4_restore does not exist. 
It will be created during restore process 2016-08-18 10:07:14,655 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410016 2016-08-18 10:07:14,656 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:07:14,659 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-895659471) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:07:14,659 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59647 because read count=-1. Number of active connections: 10 2016-08-18 10:07:14,659 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-18 10:07:14,662 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest 2016-08-18 10:07:14,666 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:07:14,666 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest 2016-08-18 10:07:14,666 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1471539957141' to 'ns1:table1_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141 2016-08-18 10:07:14,676 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3f7e227b connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:07:14,679 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3f7e227b0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:07:14,679 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@109eeacc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:07:14,679 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:07:14,679 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:07:14,680 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3f7e227b-0x1569e9d55410017 connected 2016-08-18 10:07:14,681 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:07:14,681 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59651; # active connections: 10 2016-08-18 10:07:14,682 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:14,682 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59651 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:14,683 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns1:table1_restore' 2016-08-18 10:07:14,683 DEBUG 
[main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843 2016-08-18 10:07:14,684 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f in region [hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843] 2016-08-18 10:07:14,685 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:07:14,687 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb first=row0 last=row98 2016-08-18 10:07:14,695 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:07:14,695 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59652; # active connections: 11 2016-08-18 10:07:14,696 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:14,696 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59652 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:14,698 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-18 10:07:14,803 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
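
The create RPC above (Client=tyu//10.22.9.171 create 'ns1:table1_restore', {NAME => 'f', ...}) re-creates the restore target with the family schema captured at backup time. An equivalent client-side create against the 2.0-era admin API (namespace ns1 assumed to already exist):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateRestoreTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns1:table1_restore"));
            HColumnDescriptor f = new HColumnDescriptor("f");
            f.setMaxVersions(1);   // VERSIONS => '1' in the logged spec
            f.setBlocksize(65536); // BLOCKSIZE => '65536'
            htd.addFamily(f);
            admin.createTable(htd); // blocks until CreateTableProcedure finishes on the master
        }
    }
}
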
2016-08-18 10:07:14,805 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-18 10:07:14,807 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:593960000000000 2016-08-18 10:07:14,907 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-18 10:07:14,923 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741912_1088{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 290 2016-08-18 10:07:15,111 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-18 10:07:15,331 DEBUG [ProcedureExecutor-6] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 10:07:15,333 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns1:table1_restore 2016-08-18 10:07:15,341 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741913_1089{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 45 2016-08-18 10:07:15,416 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-18 10:07:15,746 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 2016-08-18 10:07:15,746 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.: disabling compactions & flushes 2016-08-18 10:07:15,746 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 2016-08-18 10:07:15,746 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 
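
The recurring "Checking to see if procedure is done procId=15" records are the client side of that blocking create: HBaseAdmin polls the master until CreateTableProcedure reaches FINISHED. From user code, the same effect is a wait loop on table availability, for example:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitForTable {
    // Polls until the table is fully created and its regions are online.
    static void waitForTable(Admin admin, TableName table, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!admin.isTableAvailable(table)) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("table " + table + " not available in time");
            }
            Thread.sleep(200); // same spirit as the procId polling visible in the log
        }
    }
}
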
2016-08-18 10:07:15,855 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9."} 2016-08-18 10:07:15,856 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:07:15,857 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 10:07:15,919 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=15 2016-08-18 10:07:15,965 INFO [ProcedureExecutor-6] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874 2016-08-18 10:07:15,966 ERROR [ProcedureExecutor-6] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:07:15,966 INFO [ProcedureExecutor-6] master.RegionStates(1106): Transition {ce195e475d29c825c7b292e0d7918bf9 state=OFFLINE, ts=1471540035965, server=null} to {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_OPEN, ts=1471540035966, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:07:15,966 INFO [ProcedureExecutor-6] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874 2016-08-18 10:07:15,967 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:07:15,968 INFO [PriorityRpcServer.handler=1,queue=1,port=59399] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 2016-08-18 10:07:15,973 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(6339): Opening region: {ENCODED => ce195e475d29c825c7b292e0d7918bf9, NAME => 'ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:07:15,974 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore ce195e475d29c825c7b292e0d7918bf9 2016-08-18 10:07:15,974 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 2016-08-18 10:07:15,977 INFO [StoreOpener-ce195e475d29c825c7b292e0d7918bf9-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:07:15,977 INFO [StoreOpener-ce195e475d29c825c7b292e0d7918bf9-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:07:15,978 DEBUG [StoreOpener-ce195e475d29c825c7b292e0d7918bf9-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f 2016-08-18 10:07:15,979 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9 2016-08-18 10:07:15,983 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:07:15,983 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(871): Onlined ce195e475d29c825c7b292e0d7918bf9; next sequenceid=2 2016-08-18 10:07:15,986 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:07:15,988 INFO [PostOpenDeployTasks:ce195e475d29c825c7b292e0d7918bf9] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:15,988 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_OPEN, ts=1471540035966, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:07:15,988 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(1106): Transition {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_OPEN, ts=1471540035966, server=10.22.9.171,59399,1471539932874} to {ce195e475d29c825c7b292e0d7918bf9 state=OPEN, ts=1471540035988, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:15,988 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:07:15,989 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:15,989 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(452): Onlined ce195e475d29c825c7b292e0d7918bf9 on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:15,990 DEBUG [ProcedureExecutor-6] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:07:15,990 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540035990,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-18 10:07:15,990 ERROR [B.defaultRpcServer.handler=3,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:07:15,990 DEBUG [PostOpenDeployTasks:ce195e475d29c825c7b292e0d7918bf9] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:15,991 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:15,991 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:15,992 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META
2016-08-18 10:07:16,319 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:593960000000000
2016-08-18 10:07:16,320 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 1.5160sec: CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=FINISHED
2016-08-18 10:07:16,923 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-18 10:07:16,923 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns1:table1_restore completed
2016-08-18 10:07:16,924 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:16,924 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410017
2016-08-18 10:07:16,926 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:16,927 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59651 because read count=-1. Number of active connections: 11
2016-08-18 10:07:16,927 DEBUG [main] util.RestoreServerUtil(255): cluster holding the backup image: hdfs://localhost:59388; local cluster node: hdfs://localhost:59388
2016-08-18 10:07:16,927 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141 is on the local cluster; backing it up before restore
2016-08-18 10:07:16,927 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-4272786) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:16,927 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59652 because read count=-1.
Number of active connections: 11 2016-08-18 10:07:16,927 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-1148272917) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:07:16,942 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741914_1090{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 8292 2016-08-18 10:07:17,348 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore 2016-08-18 10:07:17,349 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore 2016-08-18 10:07:17,365 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843 2016-08-18 10:07:17,366 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x41355ff0 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:07:17,370 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x41355ff00x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:07:17,371 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30496823, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:07:17,371 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:07:17,372 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:07:17,372 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x41355ff0-0x1569e9d55410018 connected 2016-08-18 10:07:17,374 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:07:17,374 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59657; # active connections: 10 2016-08-18 10:07:17,375 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:17,375 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59657 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:17,380 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available 2016-08-18 10:07:17,389 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:07:17,389 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59658; # active connections: 11 2016-08-18 10:07:17,390 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu 
(auth:SIMPLE) 2016-08-18 10:07:17,390 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59658 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:17,405 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:07:17,409 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb first=row0 last=row98 2016-08-18 10:07:17,419 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9., hostname=10.22.9.171,59399,1471539932874, seqNum=2 for row with hfile group [{[B@34bdd2,hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb}] 2016-08-18 10:07:17,426 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:07:17,427 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59659; # active connections: 7 2016-08-18 10:07:17,427 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:17,427 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59659 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:17,428 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(670): Validating hfile at hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb for inclusion in store f region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 
2016-08-18 10:07:17,432 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(682): HFile bounds: first=row0 last=row98
2016-08-18 10:07:17,432 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 10:07:17,435 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/906a862f2f2c4d12baa761fdde5898d9_SeqId_4_
2016-08-18 10:07:17,435 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(742): Loaded HFile hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb into store 'f' as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/906a862f2f2c4d12baa761fdde5898d9_SeqId_4_ - updating store file list.
2016-08-18 10:07:17,441 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(777): Loaded HFile hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/906a862f2f2c4d12baa761fdde5898d9_SeqId_4_ into store 'f'
2016-08-18 10:07:17,441 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb into store f (new location: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/906a862f2f2c4d12baa761fdde5898d9_SeqId_4_)
2016-08-18 10:07:17,445 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:07:17,447 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:17,448 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410018
2016-08-18 10:07:17,450 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:17,451 INFO [main] impl.RestoreClientImpl(292): ns1:test-1471539957141 has been successfully restored to ns1:table1_restore
2016-08-18 10:07:17,451 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (1879474228) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:17,451 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 10:07:17,451 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59658 because read count=-1. Number of active connections: 11
2016-08-18 10:07:17,451 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (1826927731) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:07:17,451 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59657 because read count=-1. Number of active connections: 11
2016-08-18 10:07:17,451 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (-2101892986) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:17,451 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59659 because read count=-1. Number of active connections: 7
2016-08-18 10:07:17,451 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/
2016-08-18 10:07:17,452 DEBUG [main] impl.RestoreClientImpl(215): Need to clear merged image; to be implemented in a future JIRA
2016-08-18 10:07:17,453 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest
2016-08-18 10:07:17,456 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737
2016-08-18 10:07:17,456 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest
2016-08-18 10:07:17,456 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14715399571411' to 'ns2:table2_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411
2016-08-18 10:07:17,466 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5f3ef370 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:17,468 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5f3ef3700x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:17,469 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e838add, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:17,469 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:07:17,469 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:17,469 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5f3ef370-0x1569e9d55410019 connected
2016-08-18 10:07:17,471 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:17,471 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59663; # active connections: 10
2016-08-18 10:07:17,472 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:17,472 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59663 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:17,473 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns2:table2_restore'
2016-08-18 10:07:17,473 DEBUG [main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3
2016-08-18 10:07:17,475 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f] in region [hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3]
2016-08-18 10:07:17,475 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:07:17,479 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e first=row0 last=row98
2016-08-18 10:07:17,480 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:07:17,480 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59664; # active connections: 11
2016-08-18 10:07:17,481 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:17,481 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59664 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:17,482 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 10:07:17,584 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:07:17,586 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=16 2016-08-18 10:07:17,587 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:593960000000000 2016-08-18 10:07:17,693 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=16 2016-08-18 10:07:17,705 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741915_1091{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 290 2016-08-18 10:07:17,899 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=16 2016-08-18 10:07:18,111 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 10:07:18,112 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns2:table2_restore 2016-08-18 10:07:18,120 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741916_1092{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 45 2016-08-18 10:07:18,206 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=16 2016-08-18 10:07:18,529 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:07:18,529 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.: disabling compactions & flushes 2016-08-18 10:07:18,529 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:07:18,529 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
2016-08-18 10:07:18,637 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7."}
2016-08-18 10:07:18,639 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:18,640 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:07:18,712 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 10:07:18,744 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:07:18,745 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:07:18,745 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {b61ab1f232defc5aa4ae331a63c6cdd7 state=OFFLINE, ts=1471540038744, server=null} to {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_OPEN, ts=1471540038745, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:18,745 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874 2016-08-18 10:07:18,746 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:07:18,747 INFO [PriorityRpcServer.handler=2,queue=0,port=59399] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:07:18,752 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(6339): Opening region: {ENCODED => b61ab1f232defc5aa4ae331a63c6cdd7, NAME => 'ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:07:18,752 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore b61ab1f232defc5aa4ae331a63c6cdd7 2016-08-18 10:07:18,752 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:07:18,755 INFO [StoreOpener-b61ab1f232defc5aa4ae331a63c6cdd7-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:07:18,755 INFO [StoreOpener-b61ab1f232defc5aa4ae331a63c6cdd7-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:07:18,756 DEBUG [StoreOpener-b61ab1f232defc5aa4ae331a63c6cdd7-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f 2016-08-18 10:07:18,757 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7 2016-08-18 10:07:18,762 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:07:18,762 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(871): Onlined b61ab1f232defc5aa4ae331a63c6cdd7; next sequenceid=2 2016-08-18 10:07:18,763 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:07:18,764 INFO [PostOpenDeployTasks:b61ab1f232defc5aa4ae331a63c6cdd7] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
2016-08-18 10:07:18,765 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_OPEN, ts=1471540038745, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:07:18,765 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(1106): Transition {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_OPEN, ts=1471540038745, server=10.22.9.171,59399,1471539932874} to {b61ab1f232defc5aa4ae331a63c6cdd7 state=OPEN, ts=1471540038765, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:18,765 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:07:18,765 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:18,766 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(452): Onlined b61ab1f232defc5aa4ae331a63c6cdd7 on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:18,766 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:07:18,766 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540038766,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-18 10:07:18,766 ERROR [B.defaultRpcServer.handler=3,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:07:18,767 DEBUG [PostOpenDeployTasks:b61ab1f232defc5aa4ae331a63c6cdd7] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
2016-08-18 10:07:18,769 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:18,769 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:18,770 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META
2016-08-18 10:07:19,097 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:593960000000000
2016-08-18 10:07:19,097 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 1.5080sec: CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=FINISHED
2016-08-18 10:07:19,715 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 10:07:19,715 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns2:table2_restore completed
2016-08-18 10:07:19,715 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:19,715 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410019
2016-08-18 10:07:19,717 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:19,719 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59664 because read count=-1. Number of active connections: 11
2016-08-18 10:07:19,719 DEBUG [main] util.RestoreServerUtil(255): cluster holding the backup image: hdfs://localhost:59388; local cluster node: hdfs://localhost:59388
2016-08-18 10:07:19,719 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411 is on the local cluster; backing it up before restore
2016-08-18 10:07:19,719 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-1856425419) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:19,719 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59663 because read count=-1.
Number of active connections: 11 2016-08-18 10:07:19,719 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (1669976045) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:07:19,738 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741917_1093{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 8292 2016-08-18 10:07:20,142 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore 2016-08-18 10:07:20,143 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore 2016-08-18 10:07:20,161 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3 2016-08-18 10:07:20,162 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6cb827f connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:07:20,166 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6cb827f0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:07:20,167 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66d5fe70, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:07:20,168 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:07:20,168 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:07:20,168 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6cb827f-0x1569e9d5541001a connected 2016-08-18 10:07:20,170 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:07:20,170 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59669; # active connections: 10 2016-08-18 10:07:20,170 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:20,171 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59669 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:20,177 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-18 10:07:20,184 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:07:20,184 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59670; # active connections: 11 2016-08-18 10:07:20,184 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 
2016-08-18 10:07:20,185 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59670 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:20,189 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:07:20,193 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e first=row0 last=row98 2016-08-18 10:07:20,196 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7., hostname=10.22.9.171,59399,1471539932874, seqNum=2 for row with hfile group [{[B@6daf6b1,hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e}] 2016-08-18 10:07:20,199 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:07:20,199 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59671; # active connections: 7 2016-08-18 10:07:20,201 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:20,202 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59671 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:20,202 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(670): Validating hfile at hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e for inclusion in store f region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
2016-08-18 10:07:20,205 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(682): HFile bounds: first=row0 last=row98
2016-08-18 10:07:20,205 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 10:07:20,207 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/3ddc3cba34434d0cb7577b62195da637_SeqId_4_
2016-08-18 10:07:20,208 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(742): Loaded HFile hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e into store 'f' as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/3ddc3cba34434d0cb7577b62195da637_SeqId_4_ - updating store file list.
2016-08-18 10:07:20,214 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(777): Loaded HFile hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/3ddc3cba34434d0cb7577b62195da637_SeqId_4_ into store 'f'
2016-08-18 10:07:20,214 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e into store f (new location: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/3ddc3cba34434d0cb7577b62195da637_SeqId_4_)
2016-08-18 10:07:20,214 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936
2016-08-18 10:07:20,215 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:20,215 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541001a
2016-08-18 10:07:20,218 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:20,219 INFO [main] impl.RestoreClientImpl(292): ns2:test-14715399571411 has been successfully restored to ns2:table2_restore
2016-08-18 10:07:20,219 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 10:07:20,219 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-967439142) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:07:20,219 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/
2016-08-18 10:07:20,219 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59671 because read count=-1. Number of active connections: 7
2016-08-18 10:07:20,219 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59670 because read count=-1. Number of active connections: 11
2016-08-18 10:07:20,219 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59669 because read count=-1. Number of active connections: 11
2016-08-18 10:07:20,219 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-457207438) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:20,219 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (343736004) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:20,219 DEBUG [main] impl.RestoreClientImpl(215): Need to clear merged image; to be implemented in a future JIRA
2016-08-18 10:07:20,221 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest
2016-08-18 10:07:20,223 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737
2016-08-18 10:07:20,223 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest
2016-08-18 10:07:20,223 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14715399571412' to 'ns3:table3_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412
2016-08-18 10:07:20,229 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/archive/data/ns3/test-14715399571412 does not exist
2016-08-18 10:07:20,229 DEBUG [main] util.RestoreServerUtil(315): Found table descriptor but no archive dir for table ns3:test-14715399571412; will only create the table
2016-08-18 10:07:20,229 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x16b33e9b connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:20,231 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x16b33e9b0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:20,232 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e67fb3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:20,232 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:07:20,232 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:20,233 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x16b33e9b-0x1569e9d5541001b connected
2016-08-18 10:07:20,234 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:20,234 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59675; # active connections: 10
2016-08-18 10:07:20,235 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:20,235 INFO
[RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59675 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:20,236 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns3:table3_restore' 2016-08-18 10:07:20,237 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:07:20,238 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59676; # active connections: 11 2016-08-18 10:07:20,238 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:07:20,238 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59676 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:07:20,240 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-18 10:07:20,346 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
2016-08-18 10:07:20,348 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-18 10:07:20,350 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:593960000000000 2016-08-18 10:07:20,450 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-18 10:07:20,462 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741918_1094{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 291 2016-08-18 10:07:20,654 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-18 10:07:20,870 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 10:07:20,871 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns3:table3_restore 2016-08-18 10:07:20,880 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741919_1095{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 45 2016-08-18 10:07:20,961 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-18 10:07:21,284 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:07:21,284 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.: disabling compactions & flushes 2016-08-18 10:07:21,284 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:07:21,284 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
2016-08-18 10:07:21,397 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876."}
2016-08-18 10:07:21,398 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:21,399 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:07:21,467 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 10:07:21,482 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:07:21,508 INFO [ProcedureExecutor-1] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:07:21,509 ERROR [ProcedureExecutor-1] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:07:21,509 INFO [ProcedureExecutor-1] master.RegionStates(1106): Transition {36ac3931d4f13816604ff9289aebc876 state=OFFLINE, ts=1471540041508, server=null} to {36ac3931d4f13816604ff9289aebc876 state=PENDING_OPEN, ts=1471540041509, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:21,509 INFO [ProcedureExecutor-1] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.
with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874 2016-08-18 10:07:21,509 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:07:21,510 INFO [PriorityRpcServer.handler=4,queue=0,port=59399] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:07:21,516 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(6339): Opening region: {ENCODED => 36ac3931d4f13816604ff9289aebc876, NAME => 'ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:07:21,516 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore 36ac3931d4f13816604ff9289aebc876 2016-08-18 10:07:21,516 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:07:21,524 INFO [StoreOpener-36ac3931d4f13816604ff9289aebc876-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:07:21,525 INFO [StoreOpener-36ac3931d4f13816604ff9289aebc876-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:07:21,525 DEBUG [StoreOpener-36ac3931d4f13816604ff9289aebc876-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/f 2016-08-18 10:07:21,530 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876 2016-08-18 10:07:21,538 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:07:21,538 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(871): Onlined 36ac3931d4f13816604ff9289aebc876; next sequenceid=2 2016-08-18 10:07:21,539 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:07:21,540 INFO [PostOpenDeployTasks:36ac3931d4f13816604ff9289aebc876] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:07:21,541 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {36ac3931d4f13816604ff9289aebc876 state=PENDING_OPEN, ts=1471540041509, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874 2016-08-18 10:07:21,541 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(1106): Transition {36ac3931d4f13816604ff9289aebc876 state=PENDING_OPEN, ts=1471540041509, server=10.22.9.171,59399,1471539932874} to {36ac3931d4f13816604ff9289aebc876 state=OPEN, ts=1471540041541, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:07:21,541 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874 2016-08-18 10:07:21,541 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:07:21,542 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(452): Onlined 36ac3931d4f13816604ff9289aebc876 on 10.22.9.171,59399,1471539932874 2016-08-18 10:07:21,542 DEBUG [ProcedureExecutor-1] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874 2016-08-18 10:07:21,542 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540041542,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 10:07:21,542 ERROR [B.defaultRpcServer.handler=3,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns3:table3_restore state org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-18 10:07:21,543 DEBUG [PostOpenDeployTasks:36ac3931d4f13816604ff9289aebc876] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
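Once the OPENED transition above is recorded, the region is resolvable from any client. A minimal sketch of checking that with the RegionLocator API (assumes an open Connection as in the earlier sketch; per the log there is a single region hosted on 10.22.9.171,59399):

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    class VerifyRegionOnlineSketch {
      // Prints each region of ns3:table3_restore and its hosting server.
      static void printLocations(Connection conn) throws Exception {
        TableName tn = TableName.valueOf("ns3", "table3_restore");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegionInfo().getRegionNameAsString()
                + " -> " + loc.getServerName());
          }
        }
      }
    }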
2016-08-18 10:07:21,543 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:21,543 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:21,544 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META
2016-08-18 10:07:21,869 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:593960000000000
2016-08-18 10:07:21,869 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 1.5230sec: CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=FINISHED
2016-08-18 10:07:22,473 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 10:07:22,473 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns3:table3_restore completed
2016-08-18 10:07:22,474 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:22,474 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541001b
2016-08-18 10:07:22,476 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:22,477 INFO [main] impl.RestoreClientImpl(292): ns3:test-14715399571412 has been successfully restored to ns3:table3_restore
2016-08-18 10:07:22,478 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 10:07:22,478 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/
2016-08-18 10:07:22,478 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59676 because read count=-1. Number of active connections: 11
2016-08-18 10:07:22,478 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-1493757059) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:22,478 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59675 because read count=-1. Number of active connections: 11
2016-08-18 10:07:22,478 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-888951175) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:22,478 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged image; to be implemented in a future JIRA
2016-08-18 10:07:22,479 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:07:22,482 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737
2016-08-18 10:07:22,482 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:07:22,482 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns4:test-14715399571413' to 'ns4:table4_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413
2016-08-18 10:07:22,488 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/archive/data/ns4/test-14715399571413 does not exist
2016-08-18 10:07:22,488 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns4:test-14715399571413, will only create table
2016-08-18 10:07:22,489 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x38ebbf56 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:22,491 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x38ebbf560x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:22,492 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@103200b7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:22,493 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:22,493 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x38ebbf56-0x1569e9d5541001c connected
2016-08-18 10:07:22,493 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:22,495 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:22,495 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59683; # active connections: 10
2016-08-18 10:07:22,495 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:22,496 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59683 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:22,496 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns4:table4_restore'
2016-08-18 10:07:22,498 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:07:22,498 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59684; # active connections: 11
2016-08-18 10:07:22,498 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:22,498 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59684 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:22,500 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 10:07:22,604 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:07:22,608 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 10:07:22,609 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:table4_restore/write-master:593960000000000
2016-08-18 10:07:22,710 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 10:07:22,727 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741920_1096{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 291
2016-08-18 10:07:22,917 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 10:07:23,137 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 10:07:23,138 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(6162): creating HRegion ns4:table4_restore HTD == 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns4:table4_restore
2016-08-18 10:07:23,147 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741921_1097{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 45
2016-08-18 10:07:23,224 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
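The steady drumbeat of "Checking to see if procedure is done procId=18" lines is the client's HBaseAdmin$TableFuture polling MasterRpcServices until the procedure reaches FINISHED. In outline it behaves like the hedged loop below (MasterClient is a hypothetical stand-in for the master RPC stub, not a real HBase interface; the real future also handles timeouts and retries):

    // Hedged outline of the poll loop behind HBaseAdmin$TableFuture.
    interface MasterClient {
      boolean isProcedureDone(long procId) throws Exception;
    }

    final class ProcedurePollerSketch {
      static void waitForProcedure(MasterClient master, long procId)
          throws Exception {
        long backoffMillis = 100;
        // Each call here shows up as a MasterRpcServices(974) line in the log.
        while (!master.isProcedureDone(procId)) {
          Thread.sleep(backoffMillis);
          backoffMillis = Math.min(backoffMillis * 2, 2000); // capped backoff
        }
      }
    }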
2016-08-18 10:07:23,548 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,549 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.: disabling compactions & flushes
2016-08-18 10:07:23,549 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,549 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,662 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385."}
2016-08-18 10:07:23,663 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:23,664 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:07:23,730 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 10:07:23,773 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,774 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
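This ERROR is a benign race, not a failed create: assignment asks TableStateManager for the table's state before CreateTableProcedure has written the state cell to the table's hbase:meta row, so the lookup throws and assignment falls through to the default (not-disabled) path; the state cell is written moments later ("Updated table ... state to ENABLED in META"). A hedged simplification of the failing lookup (fetchStateCellFromMeta is a hypothetical helper, not the real TableStateManager internals):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotFoundException;

    // Hedged simplification of TableStateManager.getTableState.
    abstract class TableStateLookupSketch {
      // Hypothetical helper: reads the table's state cell from its hbase:meta
      // row, returning null when the cell does not exist yet.
      abstract String fetchStateCellFromMeta(TableName tn) throws IOException;

      String getTableState(TableName tn) throws IOException {
        String state = fetchStateCellFromMeta(tn);
        if (state == null) {
          // During CreateTableProcedure the cell is only written after the
          // regions are assigned, so this path fires and is logged as ERROR
          // even though the create goes on to complete.
          throw new TableNotFoundException(tn.getNameAsString());
        }
        return state;
      }
    }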
2016-08-18 10:07:23,775 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {1b9df2550cafc7710dd1c6ec60242385 state=OFFLINE, ts=1471540043773, server=null} to {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_OPEN, ts=1471540043775, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:23,775 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,775 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:23,777 INFO [PriorityRpcServer.handler=0,queue=0,port=59399] regionserver.RSRpcServices(1666): Open ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,781 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(6339): Opening region: {ENCODED => 1b9df2550cafc7710dd1c6ec60242385, NAME => 'ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:07:23,782 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table4_restore 1b9df2550cafc7710dd1c6ec60242385
2016-08-18 10:07:23,782 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,785 INFO [StoreOpener-1b9df2550cafc7710dd1c6ec60242385-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:07:23,785 INFO [StoreOpener-1b9df2550cafc7710dd1c6ec60242385-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:07:23,786 DEBUG [StoreOpener-1b9df2550cafc7710dd1c6ec60242385-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/f
2016-08-18 10:07:23,787 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385
2016-08-18 10:07:23,792 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:07:23,792 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(871): Onlined 1b9df2550cafc7710dd1c6ec60242385; next sequenceid=2
2016-08-18 10:07:23,792 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778
2016-08-18 10:07:23,793 INFO [PostOpenDeployTasks:1b9df2550cafc7710dd1c6ec60242385] regionserver.HRegionServer(1952): Post open deploy tasks for ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,793 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_OPEN, ts=1471540043775, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,793 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStates(1106): Transition {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_OPEN, ts=1471540043775, server=10.22.9.171,59399,1471539932874} to {1b9df2550cafc7710dd1c6ec60242385 state=OPEN, ts=1471540043793, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:23,793 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,794 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:23,794 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStates(452): Onlined 1b9df2550cafc7710dd1c6ec60242385 on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,794 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,795 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540043794,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"}
2016-08-18 10:07:23,795 ERROR [B.defaultRpcServer.handler=1,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:07:23,798 DEBUG [PostOpenDeployTasks:1b9df2550cafc7710dd1c6ec60242385] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:07:23,799 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:23,799 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] handler.OpenRegionHandler(126): Opened ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:23,799 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to ENABLED in META
2016-08-18 10:07:24,127 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:table4_restore/write-master:593960000000000
2016-08-18 10:07:24,127 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 1.5170sec: CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=FINISHED
2016-08-18 10:07:24,737 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 10:07:24,738 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns4:table4_restore completed
2016-08-18 10:07:24,738 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:24,738 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541001c
2016-08-18 10:07:24,741 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:24,742 INFO [main] impl.RestoreClientImpl(292): ns4:test-14715399571413 has been successfully restored to ns4:table4_restore
2016-08-18 10:07:24,742 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 10:07:24,742 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/
2016-08-18 10:07:24,743 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished
2016-08-18 10:07:24,742 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59683 because read count=-1. Number of active connections: 11
2016-08-18 10:07:24,742 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59684 because read count=-1. Number of active connections: 11
2016-08-18 10:07:24,742 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-605155766) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:24,742 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (520502387) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:24,743 INFO [main] impl.RestoreClientImpl(108): Restores for [ns1:test-1471539957141, ns2:test-14715399571411, ns3:test-14715399571412, ns4:test-14715399571413] are successful!
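All four restores follow the same shape: load the image manifest, create or truncate the target table, then load the image. The API doing this (RestoreClientImpl) comes from the HBASE-7912 backup/restore development branch and was not a published client API at the time, so the following is only an approximation with hypothetical names, using the parameters visible in the log:

    import org.apache.hadoop.hbase.TableName;

    // Hypothetical sketch of driving the restore seen in this log; the real
    // entry point (RestoreClientImpl) is internal to the backup branch and
    // this interface is an assumption, not its actual signature.
    interface RestoreClientSketch {
      void restore(String backupRootDir, String backupId, TableName[] from,
          TableName[] to, boolean overwrite) throws Exception;
    }

    class RestoreDriverSketch {
      static void run(RestoreClientSketch client) throws Exception {
        client.restore(
            "hdfs://localhost:59388/backupUT",  // backup root from the log
            "backup_1471539967737",             // full-backup image id
            new TableName[] { TableName.valueOf("ns4", "test-14715399571413") },
            new TableName[] { TableName.valueOf("ns4", "table4_restore") },
            true);  // existing targets are truncated rather than rejected
      }
    }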
2016-08-18 10:07:24,783 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/ns1/test-1471539957141/.backup.manifest
2016-08-18 10:07:24,787 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:07:24,787 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/ns1/test-1471539957141/.backup.manifest
2016-08-18 10:07:24,788 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/ns2/test-14715399571411/.backup.manifest
2016-08-18 10:07:24,791 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:07:24,791 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/ns2/test-14715399571411/.backup.manifest
2016-08-18 10:07:24,792 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/ns3/test-14715399571412/.backup.manifest
2016-08-18 10:07:24,795 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:07:24,795 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/ns3/test-14715399571412/.backup.manifest
2016-08-18 10:07:24,795 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5c8ef71b connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:24,799 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5c8ef71b0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:24,800 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43ac4323, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:24,800 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:24,800 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:24,801 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5c8ef71b-0x1569e9d5541001d connected
2016-08-18 10:07:24,802 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:24,802 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59694; # active connections: 10
2016-08-18 10:07:24,803 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:24,803 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59694 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:24,811 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541001d
2016-08-18 10:07:24,811 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:24,812 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged image; to be implemented in a future JIRA
2016-08-18 10:07:24,812 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1326476475) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:24,812 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59694 because read count=-1. Number of active connections: 10
2016-08-18 10:07:24,812 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest
2016-08-18 10:07:24,815 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737
2016-08-18 10:07:24,815 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns1/test-1471539957141/.backup.manifest
2016-08-18 10:07:24,816 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1471539957141' to 'ns1:table1_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141
2016-08-18 10:07:24,824 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6558493e connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:24,826 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6558493e0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:24,827 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7924f043, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:24,827 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:24,827 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:24,828 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6558493e-0x1569e9d5541001e connected
2016-08-18 10:07:24,829 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:24,829 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59698; # active connections: 10
2016-08-18 10:07:24,830 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:24,830 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59698 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:24,831 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns1:table1_restore', preserving region splits
2016-08-18 10:07:24,833 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:07:24,833 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59699; # active connections: 11
2016-08-18 10:07:24,834 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:24,834 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59699 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:24,835 INFO [main] client.HBaseAdmin$10(780): Started disable of ns1:table1_restore
2016-08-18 10:07:24,838 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns1:table1_restore
2016-08-18 10:07:24,952 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store.
2016-08-18 10:07:24,954 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:593960000000001
2016-08-18 10:07:24,956 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-18 10:07:25,060 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-18 10:07:25,171 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540045171,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-18 10:07:25,172 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:25,173 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLING in META
2016-08-18 10:07:25,264 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-18 10:07:25,277 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(395): Offlining 1 region(s).
2016-08-18 10:07:25,281 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. (offlining), current state: {ce195e475d29c825c7b292e0d7918bf9 state=OPEN, ts=1471540035988, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:25,281 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {ce195e475d29c825c7b292e0d7918bf9 state=OPEN, ts=1471540035988, server=10.22.9.171,59399,1471539932874} to {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_CLOSE, ts=1471540045281, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:25,281 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. with state=PENDING_CLOSE
2016-08-18 10:07:25,281 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:25,285 INFO [PriorityRpcServer.handler=3,queue=1,port=59399] regionserver.RSRpcServices(1314): Close ce195e475d29c825c7b292e0d7918bf9, moving to null
2016-08-18 10:07:25,285 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:25,285 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.: disabling compactions & flushes
2016-08-18 10:07:25,285 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:25,287 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,59399,1471539932874 for region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:25,287 INFO [StoreCloserThread-ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.-1] regionserver.HStore(839): Closed f
2016-08-18 10:07:25,288 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:07:25,292 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2
2016-08-18 10:07:25,295 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:25,296 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.AssignmentManager(2884): Got transition CLOSED for {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_CLOSE, ts=1471540045281, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:07:25,297 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(1106): Transition {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_CLOSE, ts=1471540045281, server=10.22.9.171,59399,1471539932874} to {ce195e475d29c825c7b292e0d7918bf9 state=OFFLINE, ts=1471540045297, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:25,297 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. with state=OFFLINE
2016-08-18 10:07:25,297 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:25,298 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(590): Offlined ce195e475d29c825c7b292e0d7918bf9 from 10.22.9.171,59399,1471539932874
2016-08-18 10:07:25,298 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:25,439 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540045439,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-18 10:07:25,440 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:25,441 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLED in META
2016-08-18 10:07:25,441 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(424): Disable of table ns1:table1_restore completed.
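Because ns1:table1_restore already exists from the earlier full restore, RestoreServerUtil truncates it rather than recreating it. With the public Admin API the equivalent sequence is shown below (a minimal sketch; truncateTable requires the table to be disabled first, and preserveSplits=true matches the TruncateTableProcedure that follows):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    class TruncateTargetSketch {
      // Assumes an Admin handle as in the first sketch.
      static void truncatePreservingSplits(Admin admin) throws Exception {
        TableName tn = TableName.valueOf("ns1", "table1_restore");
        admin.disableTable(tn);        // DisableTableProcedure (procId=19 above)
        admin.truncateTable(tn, true); // TruncateTableProcedure, preserveSplits=true (procId=20 below)
      }
    }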
2016-08-18 10:07:25,571 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-18 10:07:25,655 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:593960000000001
2016-08-18 10:07:25,655 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 707msec: DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=FINISHED
2016-08-18 10:07:26,074 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=19
2016-08-18 10:07:26,074 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns1:table1_restore completed
2016-08-18 10:07:26,076 INFO [main] client.HBaseAdmin$8(615): Started truncating ns1:table1_restore
2016-08-18 10:07:26,080 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns1:table1_restore
2016-08-18 10:07:26,191 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 10:07:26,194 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:593960000000002
2016-08-18 10:07:26,196 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(87): waiting for 'ns1:table1_restore' regions in transition
2016-08-18 10:07:26,306 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471540046305,"tag":[],"qualifier":"","vlen":0}]},"row":"ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9."}
2016-08-18 10:07:26,307 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:26,308 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => ce195e475d29c825c7b292e0d7918bf9, NAME => 'ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.', STARTKEY => '', ENDKEY => ''}]
2016-08-18 10:07:26,311 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(408): Removing 'ns1:table1_restore' from region states.
2016-08-18 10:07:26,312 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(412): Marking 'ns1:table1_restore' as deleted.
2016-08-18 10:07:26,313 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471540046312,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns1:table1_restore"}
2016-08-18 10:07:26,313 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:26,314 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1726): Deleted table ns1:table1_restore state from META
2016-08-18 10:07:26,423 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(340): Archiving region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. from FS
2016-08-18 10:07:26,427 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9
2016-08-18 10:07:26,432 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits]
2016-08-18 10:07:26,440 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/906a862f2f2c4d12baa761fdde5898d9_SeqId_4_, to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/906a862f2f2c4d12baa761fdde5898d9_SeqId_4_
2016-08-18 10:07:26,445 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits/6.seqid, to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits/6.seqid
2016-08-18 10:07:26,446 INFO [IPC Server handler 5 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741913_1089 127.0.0.1:59389
2016-08-18 10:07:26,447 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9
2016-08-18 10:07:26,447 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(344): Table 'ns1:table1_restore' archived!
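Note that truncate does not delete the old store file outright: HFileArchiver moves it, mirroring the data layout under the archive directory, so data/ns1/table1_restore/<region>/f/<hfile> reappears under archive/data/... with the same relative path. A hedged sketch of that path mapping (string manipulation only; the real archiver also handles name collisions and retries, and here the source sits under .tmp because the table directory was moved aside first):

    import org.apache.hadoop.fs.Path;

    class ArchivePathSketch {
      // Maps a file under <root>/data/... (or <root>/.tmp/data/...) to its
      // mirror under <root>/archive/data/..., as seen in the log above.
      static Path toArchivePath(Path rootDir, Path storeFile) {
        String root = rootDir.toUri().getPath();
        String file = storeFile.toUri().getPath();
        String relative = file.substring(root.length() + 1); // e.g. "data/ns1/..."
        if (relative.startsWith(".tmp/")) {
          relative = relative.substring(".tmp/".length());   // archive path drops .tmp
        }
        return new Path(new Path(rootDir, "archive"), relative);
      }
    }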
2016-08-18 10:07:26,448 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741912_1088 127.0.0.1:59389
2016-08-18 10:07:26,563 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741922_1098{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 290
2016-08-18 10:07:26,968 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 10:07:26,970 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns1:table1_restore
2016-08-18 10:07:26,979 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741923_1099{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 45
2016-08-18 10:07:27,023 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:07:27,383 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:27,384 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.: disabling compactions & flushes
2016-08-18 10:07:27,384 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:27,384 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:27,490 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9."}
2016-08-18 10:07:27,491 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:27,492 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:07:27,598 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,599 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:07:27,600 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {ce195e475d29c825c7b292e0d7918bf9 state=OFFLINE, ts=1471540047598, server=null} to {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_OPEN, ts=1471540047600, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:27,600 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,600 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:27,602 INFO [PriorityRpcServer.handler=2,queue=0,port=59399] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:27,607 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(6339): Opening region: {ENCODED => ce195e475d29c825c7b292e0d7918bf9, NAME => 'ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:07:27,607 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore ce195e475d29c825c7b292e0d7918bf9
2016-08-18 10:07:27,607 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:27,610 INFO [StoreOpener-ce195e475d29c825c7b292e0d7918bf9-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:07:27,611 INFO [StoreOpener-ce195e475d29c825c7b292e0d7918bf9-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4, incoming window min 6
2016-08-18 10:07:27,611 DEBUG [StoreOpener-ce195e475d29c825c7b292e0d7918bf9-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f
2016-08-18 10:07:27,612 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9
2016-08-18 10:07:27,616 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:07:27,616 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(871): Onlined ce195e475d29c825c7b292e0d7918bf9; next sequenceid=2
2016-08-18 10:07:27,617 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:07:27,617 INFO [PostOpenDeployTasks:ce195e475d29c825c7b292e0d7918bf9] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:27,618 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_OPEN, ts=1471540047600, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,618 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStates(1106): Transition {ce195e475d29c825c7b292e0d7918bf9 state=PENDING_OPEN, ts=1471540047600, server=10.22.9.171,59399,1471539932874} to {ce195e475d29c825c7b292e0d7918bf9 state=OPEN, ts=1471540047618, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:07:27,618 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,619 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:27,619 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStates(452): Onlined ce195e475d29c825c7b292e0d7918bf9 on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,620 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,620 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540047620,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-18 10:07:27,620 ERROR [B.defaultRpcServer.handler=0,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:07:27,620 DEBUG [PostOpenDeployTasks:ce195e475d29c825c7b292e0d7918bf9] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
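The two TableNotFoundException stack traces above appear to be a benign ordering artifact: the truncate procedure assigns the region before the table-state row has been written to hbase:meta, so TableStateManager cannot find the table yet; the ENABLED state lands a few milliseconds later (see the MetaTableAccessor(1700) entry below). A client that needs to sit out this window can poll availability; a minimal sketch, assuming an existing Admin handle named admin for this cluster:

    import org.apache.hadoop.hbase.TableName;

    TableName tn = TableName.valueOf("ns1", "table1_restore");
    // Returns true once every region of the table reports OPEN.
    while (!admin.isTableAvailable(tn)) {
      Thread.sleep(100);
    }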
2016-08-18 10:07:27,620 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:07:27,621 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. on 10.22.9.171,59399,1471539932874
2016-08-18 10:07:27,622 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META
2016-08-18 10:07:27,724 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(129): truncate 'ns1:table1_restore' completed
2016-08-18 10:07:27,829 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:593960000000002
2016-08-18 10:07:27,829 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 1.6430sec: TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=FINISHED
2016-08-18 10:07:27,970 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=20
2016-08-18 10:07:27,970 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns1:table1_restore completed
2016-08-18 10:07:27,971 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:27,971 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541001e
2016-08-18 10:07:27,972 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:27,973 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59698 because read count=-1. Number of active connections: 11
2016-08-18 10:07:27,973 DEBUG [main] util.RestoreServerUtil(255): cluster holding the backup image: hdfs://localhost:59388; local cluster node: hdfs://localhost:59388
2016-08-18 10:07:27,973 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/archive/data/ns1/test-1471539957141 is on the local cluster; back it up before restore
2016-08-18 10:07:27,973 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (1486047761) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:27,973 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59699 because read count=-1. Number of active connections: 11
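The TRUNCATE that just finished (procId=20, preserveSplits=true) is a single Admin call on the client side. A sketch, reusing the admin handle assumed above:

    // Drives the TruncateTableProcedure logged above; `true` keeps the
    // region split points, matching "preserveSplits=true".
    admin.truncateTable(TableName.valueOf("ns1", "table1_restore"), true);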
2016-08-18 10:07:27,973 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-1544008933) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:27,989 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741924_1100{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 8292
2016-08-18 10:07:28,079 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741912_1088, blk_1073741913_1089]
2016-08-18 10:07:28,391 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-18 10:07:28,391 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-18 10:07:28,409 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843
2016-08-18 10:07:28,410 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7841c28b connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:28,415 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7841c28b0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:28,416 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@739e71ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:28,416 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:28,417 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:28,417 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7841c28b-0x1569e9d5541001f connected
2016-08-18 10:07:28,419 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:28,419 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59705; # active connections: 10
2016-08-18 10:07:28,420 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:28,420 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59705 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:28,426 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-18 10:07:28,432 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:07:28,432 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59706; # active connections: 11
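Each hconnection-0x... identifier above is a fresh client Connection with its own ZooKeeper session against the ensemble at localhost:49480 (the mini-cluster's client port from this run). A minimal sketch of how such a connection is opened:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "localhost");
    conf.setInt("hbase.zookeeper.property.clientPort", 49480);
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // conn.getAdmin(), conn.getTable(...), conn.getRegionLocator(...)
    }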
2016-08-18 10:07:28,432 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:28,433 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59706 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:28,438 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:07:28,441 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb first=row0 last=row98
2016-08-18 10:07:28,445 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9., hostname=10.22.9.171,59399,1471539932874, seqNum=2 for row with hfile group [{[B@64b642e8,hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb}]
2016-08-18 10:07:28,446 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:28,446 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59707; # active connections: 7
2016-08-18 10:07:28,447 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:28,447 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59707 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:28,447 INFO [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HStore(670): Validating hfile at hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb for inclusion in store f region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:07:28,451 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HStore(682): HFile bounds: first=row0 last=row98
2016-08-18 10:07:28,451 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 10:07:28,452 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/1ab8ea84eca346a8b0a7594c6fe59a72_SeqId_4_
2016-08-18 10:07:28,453 INFO [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HStore(742): Loaded HFile hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb into store 'f' as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/1ab8ea84eca346a8b0a7594c6fe59a72_SeqId_4_ - updating store file list.
2016-08-18 10:07:28,459 INFO [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HStore(777): Loaded HFile hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/1ab8ea84eca346a8b0a7594c6fe59a72_SeqId_4_ into store 'f'
2016-08-18 10:07:28,459 INFO [B.defaultRpcServer.handler=3,queue=0,port=59399] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843/f/2b064a5eb2b34ec7bc195a73be8392cb into store f (new location: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/1ab8ea84eca346a8b0a7594c6fe59a72_SeqId_4_)
2016-08-18 10:07:28,459 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:07:28,460 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:28,460 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541001f
2016-08-18 10:07:28,462 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:28,463 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59707 because read count=-1. Number of active connections: 7
2016-08-18 10:07:28,463 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59705 because read count=-1. Number of active connections: 11
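The sequence above (validate HFile bounds against region bounds, commit the file into the region's family directory, update the store file list) is the server side of a client bulk load. A hedged sketch of the client call, using the staging directory from the RestoreServerUtil messages and the four-argument doBulkLoad of this HBase era:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    Path staged = new Path("hdfs://localhost:59388/user/tyu/hbase-staging/restore/3c1d62f1b34f7382cb57de1ded772843");
    TableName tn = TableName.valueOf("ns1", "table1_restore");
    try (Connection conn = ConnectionFactory.createConnection(conf);  // conf as above
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn);
         Admin admin = conn.getAdmin()) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // Moves f/<hfile> atomically into the region, as logged by HStore above.
      loader.doBulkLoad(staged, admin, table, locator);
    }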
2016-08-18 10:07:28,463 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (-1441116000) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:28,463 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-1536291737) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:28,463 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (2130424177) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:07:28,463 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59706 because read count=-1. Number of active connections: 11
2016-08-18 10:07:28,464 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns1:test-1471539957141' to 'ns1:table1_restore' from log dirs: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:07:28,465 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x642f43b7 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:28,467 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x642f43b70x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:28,468 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f268aec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:28,468 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:28,468 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:28,469 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x642f43b7-0x1569e9d55410020 connected
2016-08-18 10:07:28,470 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:28,470 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59709; # active connections: 10
2016-08-18 10:07:28,471 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:28,471 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59709 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:28,476 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:59388/backupUT/backup_1471540016356/WALs from hbase tables ,ns1:test-1471539957141 to tables ,ns1:table1_restore
2016-08-18 10:07:28,476 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns1:test-1471539957141 into ns1:table1_restore
2016-08-18 10:07:28,480 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476 from hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:07:28,482 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6714453c connecting to ZooKeeper ensemble=localhost:49480
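MapReduceRestoreService drives a WALPlayer job over the backed-up WALs, mapping the original table onto the restore target and, because the bulk-output property is set, emitting HFiles into the bulk_output directory instead of writing live edits. A sketch of the equivalent invocation, with paths and table names taken from the messages above; treat the wiring as an approximation of what the service does internally:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.WALPlayer;
    import org.apache.hadoop.util.ToolRunner;

    Configuration conf = HBaseConfiguration.create();
    // Redirect output to HFiles for a later bulk load.
    conf.set("wal.bulk.output",
        "/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476");
    int rc = ToolRunner.run(conf, new WALPlayer(), new String[] {
        "hdfs://localhost:59388/backupUT/backup_1471540016356/WALs", // WAL input dir
        "ns1:test-1471539957141",                                    // source table
        "ns1:table1_restore"                                         // mapped target table
    });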
2016-08-18 10:07:28,484 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6714453c0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:28,485 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70f6b0d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:28,485 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:28,485 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:28,486 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6714453c-0x1569e9d55410021 connected
2016-08-18 10:07:28,487 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:07:28,487 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59711; # active connections: 11
2016-08-18 10:07:28,488 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:28,488 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59711 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:28,494 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled
2016-08-18 10:07:28,494 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns1:test-1471539957141
2016-08-18 10:07:28,497 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:28,497 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59712; # active connections: 12
2016-08-18 10:07:28,498 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:28,498 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59712 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:28,501 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count
2016-08-18 10:07:28,502 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_ac615f6a-5982-40a6-8aa4-76ddf8cdf55f
2016-08-18 10:07:28,514 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741925_1101{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 153
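HFileOutputFormat2 is configured against the source table's current region layout: one reduce partition per region, with the TotalOrderPartitioner split points serialized to the partitions_* file logged above. A minimal sketch of that configuration step, reusing the conf from the WALPlayer sketch:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
    import org.apache.hadoop.mapreduce.Job;

    Job job = Job.getInstance(conf, "walplayer-bulkload"); // job name is illustrative
    TableName tn = TableName.valueOf("ns1", "test-1471539957141");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn)) {
      // Sets the reducer count to the region count and wires the partitioner.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }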
2016-08-18 10:07:28,923 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-18 10:07:29,132 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-1596628949818002387.jar
2016-08-18 10:07:30,282 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-2737282748553836920.jar
2016-08-18 10:07:30,656 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5314073342628499251.jar
2016-08-18 10:07:30,677 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7804284160183899759.jar
2016-08-18 10:07:31,864 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6008174791021713153.jar
2016-08-18 10:07:31,864 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-18 10:07:31,864 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-18 10:07:31,865 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-18 10:07:31,865 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 10:07:31,866 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-18 10:07:31,866 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-18 10:07:32,074 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6479041857354082839.jar
2016-08-18 10:07:32,075 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6479041857354082839.jar
2016-08-18 10:07:33,115 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:07:33,259 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5810312487265850337.jar
2016-08-18 10:07:33,260 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6479041857354082839.jar
2016-08-18 10:07:33,260 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6479041857354082839.jar
2016-08-18 10:07:33,261 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5810312487265850337.jar
2016-08-18 10:07:33,261 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar
2016-08-18 10:07:33,262 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns1:test-1471539957141 output configured.
2016-08-18 10:07:33,262 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:07:33,262 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410021
2016-08-18 10:07:33,262 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:33,263 DEBUG [main] mapreduce.WALPlayer(324): success configuring load incremental job
2016-08-18 10:07:33,264 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-1452980258) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:33,264 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59711 because read count=-1. Number of active connections: 12
2016-08-18 10:07:33,264 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59712 because read count=-1. Number of active connections: 12
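Each "For class X, using jar Y" line above is TableMapReduceUtil resolving a class to its containing jar (packaging one on the fly for classes that only exist on the test classpath) and shipping it with the job. In client code this is a single call; a sketch, assuming job is the Job being configured:

    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

    // Adds the HBase, ZooKeeper, netty, protobuf, guava, htrace and
    // metrics jars resolved above to the job's distributed cache.
    TableMapReduceUtil.addDependencyJars(job);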
2016-08-18 10:07:33,264 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (919064514) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:33,264 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 10:07:33,395 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741926_1102{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1556922
2016-08-18 10:07:33,814 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741927_1103{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 533455
2016-08-18 10:07:34,225 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741928_1104{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 38156
2016-08-18 10:07:34,496 DEBUG [10.22.9.171,59399,1471539932874_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:07:34,643 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741929_1105{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:07:34,651 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741930_1106{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 112558
2016-08-18 10:07:34,902 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:07:34,904 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x45654144 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:07:34,908 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x456541440x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:07:34,909 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20e6c032, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:07:34,909 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:07:34,909 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:07:34,909 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x45654144-0x1569e9d55410022 connected
2016-08-18 10:07:34,909 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup
2016-08-18 10:07:34,912 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:34,912 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59723; # active connections: 11
2016-08-18 10:07:34,912 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:34,913 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59723 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:34,916 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:07:34,916 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59724; # active connections: 7
2016-08-18 10:07:34,917 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:07:34,917 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59724 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:07:34,919 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418
2016-08-18 10:07:34,920 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418
2016-08-18 10:07:34,920 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:07:34,921 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:07:34,921 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:07:34,922 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:07:34,922 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:07:34,923 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:07:34,923 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:07:34,924 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:07:34,924 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721
2016-08-18 10:07:34,925 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721
2016-08-18 10:07:34,925 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:07:34,925 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:07:34,926 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410022
2016-08-18 10:07:34,926 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:07:34,927 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (279792410) to /10.22.9.171:59399 from tyu: closed
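The BackupLogCleaner rule exercised above: a WAL that has rolled into oldWALs may be removed once hbase:backup records it as backed up; otherwise it must be kept for the next incremental backup. A hedged sketch of that decision only; isBackedUp is a hypothetical stand-in for the BackupSystemTable lookup, and the real plugin reports deletable files back to the master's cleaner chore rather than deleting them inline:

    import org.apache.hadoop.fs.FileStatus;

    for (FileStatus wal : fs.listStatus(oldWALsDir)) { // fs, oldWALsDir: assumed handles
      if (isBackedUp(wal.getPath())) {                 // hypothetical hbase:backup lookup
        // "Found log file in hbase:backup, deleting: ..."
        fs.delete(wal.getPath(), false);
      } else {
        // "Didn't find this log in hbase:backup, keeping: ..."
      }
    }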
2016-08-18 10:07:34,927 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59723 because read count=-1. Number of active connections: 11
2016-08-18 10:07:34,927 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-1520914301) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:07:34,927 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59724 because read count=-1. Number of active connections: 7
2016-08-18 10:07:35,087 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741931_1107{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:07:35,100 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741932_1108{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1475955
2016-08-18 10:07:35,514 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741933_1109{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:07:35,524 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741934_1110{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 792964
2016-08-18 10:07:35,949 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741935_1111{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:07:35,961 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741936_1112{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1351207
2016-08-18 10:07:36,378 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741937_1113{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 662657
2016-08-18 10:07:36,801 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741938_1114{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1795932
2016-08-18 10:07:37,229 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741939_1115{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 4516740
2016-08-18 10:07:37,637 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-18 10:07:37,655 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:59388/backupUT/backup_1471540016356/WALs for WAL files
2016-08-18 10:07:37,658 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/.backup.manifest does not appear to be a WAL file. Skipping...
2016-08-18 10:07:37,658 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024240; access_time=1471540023826; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,658 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471540022532; access_time=1471540022117; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,658 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024666; access_time=1471540024253; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,659 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471540022966; access_time=1471540022551; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,659 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471540025094; access_time=1471540024679; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,659 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471540023391; access_time=1471540022979; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,659 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471540025521; access_time=1471540025107; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,659 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528; isDirectory=false; length=1196; replication=1; blocksize=134217728; modification_time=1471540023814; access_time=1471540023404; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:07:37,669 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741940_1116{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1647
2016-08-18 10:07:38,082 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741941_1117{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 59
2016-08-18 10:07:38,500 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741942_1118{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:07:38,681 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. Skipping enforcement to allow at least one application to start
2016-08-18 10:07:38,682 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. Skipping enforcement to allow at least one application to start
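WALInputFormat built its splits a few lines up by listing the backup WAL directory and skipping anything that is not a WAL (the .backup.manifest). A minimal sketch of that scan; the name check is illustrative, not the exact predicate:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    FileSystem walFs = FileSystem.get(URI.create("hdfs://localhost:59388"), new Configuration());
    for (FileStatus stat : walFs.listStatus(new Path("/backupUT/backup_1471540016356/WALs"))) {
      String name = stat.getPath().getName();
      if (stat.isDirectory() || name.startsWith(".")) {
        continue; // e.g. .backup.manifest: "does not appear to be a WAL file. Skipping..."
      }
      // each remaining file becomes one map input split
    }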
2016-08-18 10:07:39,257 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:40,326 DEBUG [10.22.9.171,59441,1471539940207_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:07:40,361 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:07:40,625 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info
2016-08-18 10:07:40,625 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/meta
2016-08-18 10:07:40,625 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info
2016-08-18 10:07:40,626 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table
2016-08-18 10:07:40,626 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/session
2016-08-18 10:07:44,495 INFO [Socket Reader #1 for port 59477] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:44,752 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741943_1119{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:07:46,744 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:46,744 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:47,598 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:47,598 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:48,614 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:49,623 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:52,070 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:52,100 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000002 is: 143
2016-08-18 10:07:52,651 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:53,783 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:53,822 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000004 is: 143
2016-08-18 10:07:54,324 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:54,348 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000003 is: 143
2016-08-18 10:07:54,366 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:54,389 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000005 is: 143
2016-08-18 10:07:54,665 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:55,135 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:55,156 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000006 is: 143
2016-08-18 10:07:55,606 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:55,623 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000007 is: 143
2016-08-18 10:07:56,680 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:57,047 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:57,066 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000008 is: 143
2016-08-18 10:07:58,185 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE)
2016-08-18 10:07:58,199 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000009 is: 143
2016-08-18 10:08:01,249 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59825; # active connections: 11
2016-08-18 10:08:01,620 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:08:01,620 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59825 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
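The exit-code-143 warnings above are expected in this run: YARN's NodeManager stops completed map containers with SIGTERM, and 143 is simply 128 + 15. A one-line decode, for illustration:

    int exitCode = 143;
    int signal = exitCode > 128 ? exitCode - 128 : 0; // 15 == SIGTERM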
"5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:01,832 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59825 because read count=-1. Number of active connections: 11 2016-08-18 10:08:02,462 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741945_1121{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:02,489 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE) 2016-08-18 10:08:02,504 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0001_01_000010 is : 143 2016-08-18 10:08:02,542 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741944_1120{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 16357 2016-08-18 10:08:02,551 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741946_1122{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:02,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741947_1123{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:02,590 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741948_1124{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:03,616 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741940_1116 127.0.0.1:59389 2016-08-18 10:08:03,616 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741941_1117 127.0.0.1:59389 2016-08-18 10:08:03,616 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741942_1118 127.0.0.1:59389 2016-08-18 10:08:03,616 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741944_1120 127.0.0.1:59389 2016-08-18 10:08:03,616 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741943_1119 127.0.0.1:59389 2016-08-18 10:08:03,616 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741938_1114 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] 
blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741929_1105 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741935_1111 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741936_1112 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741939_1115 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741931_1107 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741937_1113 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741928_1104 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741926_1102 127.0.0.1:59389 2016-08-18 10:08:03,617 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741932_1108 127.0.0.1:59389 2016-08-18 10:08:03,618 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741930_1106 127.0.0.1:59389 2016-08-18 10:08:03,618 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741933_1109 127.0.0.1:59389 2016-08-18 10:08:03,618 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741927_1103 127.0.0.1:59389 2016-08-18 10:08:03,618 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741934_1110 127.0.0.1:59389 2016-08-18 10:08:04,109 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741926_1102, blk_1073741927_1103, blk_1073741928_1104, blk_1073741929_1105, blk_1073741930_1106, blk_1073741931_1107, blk_1073741932_1108, blk_1073741933_1109, blk_1073741934_1110, blk_1073741935_1111, blk_1073741936_1112, blk_1073741937_1113, blk_1073741938_1114, blk_1073741939_1115, blk_1073741940_1116, blk_1073741941_1117, blk_1073741942_1118, blk_1073741943_1119, blk_1073741944_1120] 2016-08-18 10:08:04,220 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476 2016-08-18 10:08:04,220 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xf87a435 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:04,224 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xf87a4350x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:04,225 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2de225c5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:04,225 DEBUG [main] 
ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:04,225 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:04,226 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xf87a435-0x1569e9d55410024 connected 2016-08-18 10:08:04,228 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:04,228 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59834; # active connections: 11 2016-08-18 10:08:04,228 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:04,229 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59834 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:04,236 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available 2016-08-18 10:08:04,238 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/_SUCCESS 2016-08-18 10:08:04,243 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:08:04,243 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59836; # active connections: 12 2016-08-18 10:08:04,244 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:04,244 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59836 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:04,249 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:08:04,252 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/f/5f2b260cb9224fc393948b64ee6f0d3f first=row-t10 last=row98 2016-08-18 10:08:04,256 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9., hostname=10.22.9.171,59399,1471539932874, seqNum=2 for 
row with hfile group [{[B@207e18c4,hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/f/5f2b260cb9224fc393948b64ee6f0d3f}] 2016-08-18 10:08:04,258 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:04,258 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59837; # active connections: 7 2016-08-18 10:08:04,258 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:04,259 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59837 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:04,259 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(670): Validating hfile at hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/f/5f2b260cb9224fc393948b64ee6f0d3f for inclusion in store f region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9. 2016-08-18 10:08:04,263 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(682): HFile bounds: first=row-t10 last=row98 2016-08-18 10:08:04,263 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(684): Region bounds: first= last= 2016-08-18 10:08:04,264 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/f/5f2b260cb9224fc393948b64ee6f0d3f as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/8f861dec10dd4a8f928ee5328527c67b_SeqId_6_ 2016-08-18 10:08:04,267 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(742): Loaded HFile hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/f/5f2b260cb9224fc393948b64ee6f0d3f into store 'f' as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/8f861dec10dd4a8f928ee5328527c67b_SeqId_6_ - updating store file list. 
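The records around this point are the complete bulk-load handshake for the full-backup image: LoadIncrementalHFiles groups each HFile under the bulk_output directory by region, and the region server validates the file's first/last keys against the region boundaries before committing it into the store under a new name with an embedded sequence id. A minimal client-side sketch of the same call path, assuming the target table already exists (API names as of the 2.0.0-SNAPSHOT build this log comes from; verify against the version in use):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class BulkLoadRestoredHFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("ns1:table1_restore");
        // bulk_output path taken from the MapReduceRestoreService record above
        Path hfofDir = new Path("hdfs://localhost:59388/var/folders/4g/"
            + "2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/"
            + "bulk_output-ns1-table1_restore-1471540048476");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(tn);
             RegionLocator locator = conn.getRegionLocator(tn);
             Admin admin = conn.getAdmin()) {
          // Groups HFiles by region, checks HFile bounds against region
          // bounds (the "HFile bounds" / "Region bounds" records here),
          // then asks each region server to adopt the files atomically.
          new LoadIncrementalHFiles(conf).doBulkLoad(hfofDir, admin, table, locator);
        }
      }
    }

Note that the loader only descends into column-family directories, which is why the job's _SUCCESS marker is skipped in the "Skipping non-directory ... _SUCCESS" record above.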
2016-08-18 10:08:04,272 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(777): Loaded HFile hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/8f861dec10dd4a8f928ee5328527c67b_SeqId_6_ into store 'f 2016-08-18 10:08:04,272 INFO [B.defaultRpcServer.handler=2,queue=0,port=59399] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471540048476/f/5f2b260cb9224fc393948b64ee6f0d3f into store f (new location: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/f/8f861dec10dd4a8f928ee5328527c67b_SeqId_6_) 2016-08-18 10:08:04,273 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:08:04,274 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:08:04,274 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410024 2016-08-18 10:08:04,275 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:04,276 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0 2016-08-18 10:08:04,276 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59837 because read count=-1. Number of active connections: 7 2016-08-18 10:08:04,276 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410020 2016-08-18 10:08:04,276 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-966542067) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:04,276 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-1496627965) to /10.22.9.171:59399 from tyu: closed 2016-08-18 10:08:04,276 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-839965480) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:04,276 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59836 because read count=-1. Number of active connections: 12 2016-08-18 10:08:04,276 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59834 because read count=-1. Number of active connections: 12 2016-08-18 10:08:04,277 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:04,277 INFO [main] impl.RestoreClientImpl(292): ns1:test-1471539957141 has been successfully restored to ns1:table1_restore 2016-08-18 10:08:04,277 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59709 because read count=-1. 
Number of active connections: 10 2016-08-18 10:08:04,277 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-18 10:08:04,277 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (-1692196428) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:04,277 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns1/test-1471539957141/ 2016-08-18 10:08:04,277 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471540016356 hdfs://localhost:59388/backupUT/backup_1471540016356/ns1/test-1471539957141/ 2016-08-18 10:08:04,278 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-18 10:08:04,279 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest 2016-08-18 10:08:04,282 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:08:04,282 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns2/test-14715399571411/.backup.manifest 2016-08-18 10:08:04,282 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14715399571411' to 'ns2:table2_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411 2016-08-18 10:08:04,303 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3d3facd7 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:04,305 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3d3facd70x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:04,306 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ed606ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:04,306 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:04,306 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:04,307 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3d3facd7-0x1569e9d55410025 connected 2016-08-18 10:08:04,308 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:04,308 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59842; # active connections: 10 2016-08-18 10:08:04,309 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:04,309 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59842 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:04,310 INFO [main] util.RestoreServerUtil(585): Truncating exising target table 'ns2:table2_restore', preserving region splits 
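The record just above announces the second restore target, and because ns2:table2_restore already exists, the records that follow empty it through the ordinary admin path: disable the table, then truncate it with preserveSplits=true so the existing region boundaries survive. A client-side sketch of the same sequence, assuming a standard connection (only the table name is taken from the log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplits {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("ns2:table2_restore");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(tn)) {
            // Runs the master-side DisableTableProcedure (procId=21 below).
            admin.disableTable(tn);
          }
          // Runs TruncateTableProcedure with preserveSplits=true (procId=22 below).
          admin.truncateTable(tn, true);
        }
      }
    }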
2016-08-18 10:08:04,311 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:08:04,311 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59843; # active connections: 11 2016-08-18 10:08:04,312 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:04,312 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59843 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:04,312 INFO [main] client.HBaseAdmin$10(780): Started disable of ns2:table2_restore 2016-08-18 10:08:04,313 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns2:table2_restore 2016-08-18 10:08:04,421 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-18 10:08:04,424 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-18 10:08:04,425 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:593960000000001 2016-08-18 10:08:04,529 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-18 10:08:04,633 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540084633,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-18 10:08:04,635 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:04,636 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLING in META 2016-08-18 10:08:04,734 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-18 10:08:04,742 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-18 10:08:04,744 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
(offlining), current state: {b61ab1f232defc5aa4ae331a63c6cdd7 state=OPEN, ts=1471540038765, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:04,744 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {b61ab1f232defc5aa4ae331a63c6cdd7 state=OPEN, ts=1471540038765, server=10.22.9.171,59399,1471539932874} to {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_CLOSE, ts=1471540084744, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:04,744 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. with state=PENDING_CLOSE 2016-08-18 10:08:04,745 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:04,746 INFO [PriorityRpcServer.handler=2,queue=0,port=59399] regionserver.RSRpcServices(1314): Close b61ab1f232defc5aa4ae331a63c6cdd7, moving to null 2016-08-18 10:08:04,747 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:04,747 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.: disabling compactions & flushes 2016-08-18 10:08:04,747 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,59399,1471539932874 for region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:04,747 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:04,748 INFO [StoreCloserThread-ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.-1] regionserver.HStore(839): Closed f 2016-08-18 10:08:04,749 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:08:04,755 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2 2016-08-18 10:08:04,756 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
2016-08-18 10:08:04,757 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.AssignmentManager(2884): Got transition CLOSED for {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_CLOSE, ts=1471540084744, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874 2016-08-18 10:08:04,757 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStates(1106): Transition {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_CLOSE, ts=1471540084744, server=10.22.9.171,59399,1471539932874} to {b61ab1f232defc5aa4ae331a63c6cdd7 state=OFFLINE, ts=1471540084757, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:04,757 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. with state=OFFLINE 2016-08-18 10:08:04,757 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:04,758 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStates(590): Offlined b61ab1f232defc5aa4ae331a63c6cdd7 from 10.22.9.171,59399,1471539932874 2016-08-18 10:08:04,759 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:04,900 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540084900,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-18 10:08:04,902 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:04,903 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLED in META 2016-08-18 10:08:04,903 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(424): Disabled table, ns2:table2_restore, is completed. 
2016-08-18 10:08:05,041 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-18 10:08:05,112 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:593960000000001 2016-08-18 10:08:05,113 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 697msec: DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=FINISHED 2016-08-18 10:08:05,546 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-18 10:08:05,546 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns2:table2_restore completed 2016-08-18 10:08:05,547 INFO [main] client.HBaseAdmin$8(615): Started truncating ns2:table2_restore 2016-08-18 10:08:05,548 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns2:table2_restore 2016-08-18 10:08:05,656 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 2016-08-18 10:08:05,660 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:593960000000002 2016-08-18 10:08:05,661 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(87): waiting for 'ns2:table2_restore' regions in transition 2016-08-18 10:08:05,771 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471540085771,"tag":[],"qualifier":"","vlen":0}]},"row":"ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7."} 2016-08-18 10:08:05,772 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:05,773 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => b61ab1f232defc5aa4ae331a63c6cdd7, NAME => 'ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.', STARTKEY => '', ENDKEY => ''}] 2016-08-18 10:08:05,775 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(408): Removing 'ns2:table2_restore' from region states. 2016-08-18 10:08:05,776 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(412): Marking 'ns2:table2_restore' as deleted. 2016-08-18 10:08:05,776 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471540085776,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns2:table2_restore"} 2016-08-18 10:08:05,777 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:05,778 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1726): Deleted table ns2:table2_restore state from META 2016-08-18 10:08:05,888 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(340): Archiving region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
from FS 2016-08-18 10:08:05,888 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7 2016-08-18 10:08:05,891 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits] 2016-08-18 10:08:05,898 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/3ddc3cba34434d0cb7577b62195da637_SeqId_4_, to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/3ddc3cba34434d0cb7577b62195da637_SeqId_4_ 2016-08-18 10:08:05,903 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits/6.seqid, to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits/6.seqid 2016-08-18 10:08:05,903 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741916_1092 127.0.0.1:59389 2016-08-18 10:08:05,911 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7 2016-08-18 10:08:05,911 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(344): Table 'ns2:table2_restore' archived! 
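Note that the truncate does not delete the old region data in place: HFileArchiver moves the store file and the recovered.edits directory into the parallel archive/data/<namespace>/<table>/<region>/ hierarchy, where the master's cleaner chores reclaim them later. A small sketch for inspecting what was archived, with paths copied from the records above (the region hash changes per run):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListArchivedRegionFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path archivedRegion = new Path("hdfs://localhost:59388/user/tyu/test-data/"
            + "bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns2/table2_restore/"
            + "b61ab1f232defc5aa4ae331a63c6cdd7");
        FileSystem fs = archivedRegion.getFileSystem(conf);
        // Expect the archived column-family dir ("f") and recovered.edits,
        // matching the "Finished archiving" records above.
        for (FileStatus status : fs.listStatus(archivedRegion)) {
          System.out.println(status.getPath());
        }
      }
    }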
2016-08-18 10:08:05,913 INFO [IPC Server handler 7 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741915_1091 127.0.0.1:59389 2016-08-18 10:08:06,034 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741949_1125{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 290 2016-08-18 10:08:06,440 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 10:08:06,442 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns2:table2_restore 2016-08-18 10:08:06,452 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741950_1126{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 45 2016-08-18 10:08:06,857 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:06,858 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.: disabling compactions & flushes 2016-08-18 10:08:06,858 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:06,858 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
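The "creating HRegion" record above dumps the full table descriptor used to recreate the emptied table, so the schema is recoverable directly from this log. Written out with the client API of this era, every value below is copied from that record:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;

    public class Table2RestoreDescriptor {
      public static HTableDescriptor build() {
        HColumnDescriptor f = new HColumnDescriptor("f");
        f.setDataBlockEncoding(DataBlockEncoding.NONE);   // DATA_BLOCK_ENCODING => 'NONE'
        f.setBloomFilterType(BloomType.ROW);              // BLOOMFILTER => 'ROW'
        f.setScope(0);                                    // REPLICATION_SCOPE => '0'
        f.setMaxVersions(1);                              // VERSIONS => '1'
        f.setCompressionType(Compression.Algorithm.NONE); // COMPRESSION => 'NONE'
        f.setMinVersions(0);                              // MIN_VERSIONS => '0'
        f.setTimeToLive(HConstants.FOREVER);              // TTL => 'FOREVER'
        f.setKeepDeletedCells(KeepDeletedCells.FALSE);    // KEEP_DELETED_CELLS => 'FALSE'
        f.setBlocksize(65536);                            // BLOCKSIZE => '65536'
        f.setInMemory(false);                             // IN_MEMORY => 'false'
        f.setBlockCacheEnabled(true);                     // BLOCKCACHE => 'true'
        HTableDescriptor htd =
            new HTableDescriptor(TableName.valueOf("ns2:table2_restore"));
        htd.addFamily(f);
        return htd;
      }
    }

Since this table is a single region (empty STARTKEY and ENDKEY), preserving splits is trivial here; on a multi-region table the truncate procedure would recreate every boundary as well.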
2016-08-18 10:08:06,972 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7."} 2016-08-18 10:08:06,974 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:06,975 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 10:08:07,082 INFO [ProcedureExecutor-5] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,083 ERROR [ProcedureExecutor-5] master.TableStateManager(134): Unable to get table ns2:table2_restore state org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122) at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-18 10:08:07,083 INFO [ProcedureExecutor-5] master.RegionStates(1106): Transition {b61ab1f232defc5aa4ae331a63c6cdd7 state=OFFLINE, ts=1471540087082, server=null} to {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_OPEN, ts=1471540087083, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:07,083 INFO [ProcedureExecutor-5] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,084 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:07,086 INFO [PriorityRpcServer.handler=1,queue=1,port=59399] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:07,091 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(6339): Opening region: {ENCODED => b61ab1f232defc5aa4ae331a63c6cdd7, NAME => 'ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:08:07,091 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore b61ab1f232defc5aa4ae331a63c6cdd7 2016-08-18 10:08:07,091 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:07,094 INFO [StoreOpener-b61ab1f232defc5aa4ae331a63c6cdd7-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:08:07,094 INFO [StoreOpener-b61ab1f232defc5aa4ae331a63c6cdd7-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:08:07,095 DEBUG [StoreOpener-b61ab1f232defc5aa4ae331a63c6cdd7-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f 2016-08-18 10:08:07,096 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7 2016-08-18 10:08:07,101 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:08:07,101 INFO [RS_OPEN_REGION-10.22.9.171:59399-1] regionserver.HRegion(871): Onlined b61ab1f232defc5aa4ae331a63c6cdd7; next sequenceid=2 2016-08-18 10:08:07,101 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:08:07,102 INFO [PostOpenDeployTasks:b61ab1f232defc5aa4ae331a63c6cdd7] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:07,103 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_OPEN, ts=1471540087083, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,103 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStates(1106): Transition {b61ab1f232defc5aa4ae331a63c6cdd7 state=PENDING_OPEN, ts=1471540087083, server=10.22.9.171,59399,1471539932874} to {b61ab1f232defc5aa4ae331a63c6cdd7 state=OPEN, ts=1471540087103, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:07,103 INFO [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,104 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:07,104 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.RegionStates(452): Onlined b61ab1f232defc5aa4ae331a63c6cdd7 on 10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,104 DEBUG [ProcedureExecutor-5] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,105 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540087104,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-18 10:08:07,105 ERROR [B.defaultRpcServer.handler=4,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns2:table2_restore state org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-18 10:08:07,105 DEBUG [PostOpenDeployTasks:b61ab1f232defc5aa4ae331a63c6cdd7] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
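The two TableNotFoundException stack traces in this stretch look alarming but appear to be a benign race inside TruncateTableProcedure: the procedure deletes the table-state row from META ("Deleted table ns2:table2_restore state from META" above) before the recreated table is marked ENABLED (below), so TableStateManager lookups issued while the region is reassigned find nothing. Assignment proceeds anyway and the procedure finishes cleanly. For client code that bulk-loads immediately after a truncate, a defensive availability check is cheap (sketch; the polling loop is an illustration, not taken from the test):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTableAvailable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("ns2:table2_restore");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // truncateTable() already blocks on the procedure, but the table
          // state write can trail the region open (the window the ERRORs
          // above fall into); poll until regions and state are both visible.
          while (!admin.isTableAvailable(tn)) {
            Thread.sleep(100);
          }
        }
      }
    }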
2016-08-18 10:08:07,105 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:07,106 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-1] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. on 10.22.9.171,59399,1471539932874 2016-08-18 10:08:07,107 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META 2016-08-18 10:08:07,114 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741915_1091, blk_1073741916_1092] 2016-08-18 10:08:07,214 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(129): truncate 'ns2:table2_restore' completed 2016-08-18 10:08:07,319 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:593960000000002 2016-08-18 10:08:07,320 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 1.6660sec: TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=FINISHED 2016-08-18 10:08:07,430 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=22 2016-08-18 10:08:07,430 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns2:table2_restore completed 2016-08-18 10:08:07,431 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:08:07,431 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410025 2016-08-18 10:08:07,434 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:07,435 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-1789586232) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:07,435 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59842 because read count=-1. Number of active connections: 11 2016-08-18 10:08:07,435 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59843 because read count=-1. 
Number of active connections: 11 2016-08-18 10:08:07,435 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:59388; local cluster node: hdfs://localhost:59388 2016-08-18 10:08:07,436 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/archive/data/ns2/test-14715399571411 on local cluster, back it up before restore 2016-08-18 10:08:07,435 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (844181111) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:07,453 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741951_1127{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 8292 2016-08-18 10:08:07,860 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore 2016-08-18 10:08:07,861 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore 2016-08-18 10:08:07,880 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3 2016-08-18 10:08:07,880 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x90d70c4 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:07,885 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x90d70c40x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:07,886 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@134012a5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:07,886 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:07,887 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:07,887 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x90d70c4-0x1569e9d55410026 connected 2016-08-18 10:08:07,889 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:07,889 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59853; # active connections: 10 2016-08-18 10:08:07,889 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:07,890 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59853 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:07,895 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-18 10:08:07,905 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE 
authentication for service MasterService, sasl=false 2016-08-18 10:08:07,905 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59854; # active connections: 11 2016-08-18 10:08:07,906 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:07,906 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59854 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:07,911 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:08:07,914 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e first=row0 last=row98 2016-08-18 10:08:07,918 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7., hostname=10.22.9.171,59399,1471539932874, seqNum=2 for row with hfile group [{[B@40fc3733,hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e}] 2016-08-18 10:08:07,919 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:07,919 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59855; # active connections: 7 2016-08-18 10:08:07,920 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:07,920 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59855 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:07,921 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(670): Validating hfile at hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e for inclusion in store f region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 
2016-08-18 10:08:07,924 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(682): HFile bounds: first=row0 last=row98 2016-08-18 10:08:07,924 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(684): Region bounds: first= last= 2016-08-18 10:08:07,926 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/994fb964b5294fcb91567fd0355f06fe_SeqId_4_ 2016-08-18 10:08:07,929 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(742): Loaded HFile hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e into store 'f' as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/994fb964b5294fcb91567fd0355f06fe_SeqId_4_ - updating store file list. 2016-08-18 10:08:07,935 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(777): Loaded HFile hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/994fb964b5294fcb91567fd0355f06fe_SeqId_4_ into store 'f 2016-08-18 10:08:07,935 INFO [B.defaultRpcServer.handler=4,queue=0,port=59399] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:59388/user/tyu/hbase-staging/restore/1147a0b47ba2d478b911f466b29f0fc3/f/9ab6388f101244b1aa56bfbffbdfea2e into store f (new location: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/994fb964b5294fcb91567fd0355f06fe_SeqId_4_) 2016-08-18 10:08:07,935 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:08:07,936 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:08:07,937 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410026 2016-08-18 10:08:07,937 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:07,938 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns2:test-14715399571411' to 'ns2:table2_restore' from log dirs: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs 2016-08-18 10:08:07,938 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-1024478311) to /10.22.9.171:59399 from tyu: closed 2016-08-18 10:08:07,938 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-2080576107) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:07,938 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-18068070) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:07,938 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59855 because read count=-1. 
Number of active connections: 7 2016-08-18 10:08:07,938 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59854 because read count=-1. Number of active connections: 11 2016-08-18 10:08:07,938 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59853 because read count=-1. Number of active connections: 11 2016-08-18 10:08:07,938 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x55255ce3 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:07,941 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x55255ce30x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:07,941 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b2535e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:07,941 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:07,941 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:07,942 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x55255ce3-0x1569e9d55410027 connected 2016-08-18 10:08:07,943 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:07,943 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59857; # active connections: 10 2016-08-18 10:08:07,944 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:07,944 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59857 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:07,945 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:59388/backupUT/backup_1471540016356/WALs from hbase tables ,ns2:test-14715399571411 to tables ,ns2:table2_restore 2016-08-18 10:08:07,945 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns2:test-14715399571411 into ns2:table2_restore 2016-08-18 10:08:07,946 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945 from hdfs://localhost:59388/backupUT/backup_1471540016356/WALs 2016-08-18 10:08:07,947 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7cab2145 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:07,949 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7cab21450x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:07,950 DEBUG [main] ipc.AbstractRpcClient(115): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b8ee23, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:07,950 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:07,951 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:07,951 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7cab2145-0x1569e9d55410028 connected 2016-08-18 10:08:07,953 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:08:07,953 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59859; # active connections: 11 2016-08-18 10:08:07,954 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:07,954 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59859 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:07,955 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-18 10:08:07,956 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns2:test-14715399571411 2016-08-18 10:08:07,959 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:07,959 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59860; # active connections: 12 2016-08-18 10:08:07,960 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:07,960 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59860 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:07,963 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-18 10:08:07,963 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_043e5f3a-1626-4ef5-b3de-ec2fa1d729a1 2016-08-18 10:08:07,970 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741952_1128{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 153 2016-08-18 10:08:08,376 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
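The WALPlayer and HFileOutputFormat2 entries above are the incremental-restore job being configured: WALPlayer reads the backed-up WALs, maps edits recorded for ns2:test-14715399571411 onto ns2:table2_restore, and writes HFiles to a bulk-output directory partitioned to match the target table's current regions (hence the TotalOrderPartitioner partitions file). A minimal sketch of launching the same kind of job, assuming the WALPlayer tool's default constructor and its wal.bulk.output key behave as in this 2.0.0-SNAPSHOT tree; paths are copied from the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.WALPlayer;
import org.apache.hadoop.util.ToolRunner;

public class WalReplaySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // With wal.bulk.output set, WALPlayer emits HFiles via HFileOutputFormat2
    // instead of issuing live Puts ("success configuring load incremental job").
    conf.set(WALPlayer.BULK_OUTPUT_CONF_KEY,
        "/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945");
    int rc = ToolRunner.run(conf, new WALPlayer(), new String[] {
        "hdfs://localhost:59388/backupUT/backup_1471540016356/WALs", // WAL input dir
        "ns2:test-14715399571411",  // table the WAL edits were recorded against
        "ns2:table2_restore"        // table to map the edits onto
    });
    System.exit(rc);
  }
}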
2016-08-18 10:08:08,807 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0001_000001 (auth:SIMPLE) 2016-08-18 10:08:09,063 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-1494756467052002399.jar 2016-08-18 10:08:10,268 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-18 10:08:17,960 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4991543173974779795.jar 2016-08-18 10:08:19,576 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4903206121363257480.jar 2016-08-18 10:08:19,622 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-1660044731818611722.jar 2016-08-18 10:08:26,419 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3654584444487511593.jar 2016-08-18 10:08:26,420 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 10:08:26,420 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 10:08:26,420 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 10:08:26,421 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:08:26,421 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 10:08:26,421 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 10:08:26,629 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5007388141487787811.jar 2016-08-18 10:08:26,630 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5007388141487787811.jar 2016-08-18 10:08:27,826 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class 
org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7275739537868936267.jar 2016-08-18 10:08:27,827 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5007388141487787811.jar 2016-08-18 10:08:27,827 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5007388141487787811.jar 2016-08-18 10:08:27,827 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7275739537868936267.jar 2016-08-18 10:08:27,828 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:08:27,828 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns2:test-14715399571411 output configured. 2016-08-18 10:08:27,828 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:08:27,828 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410028 2016-08-18 10:08:27,829 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:27,830 DEBUG [main] mapreduce.WALPlayer(324): success configuring load incremental job 2016-08-18 10:08:27,830 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (-63646844) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:27,830 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59859 because read count=-1. Number of active connections: 12 2016-08-18 10:08:27,830 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59860 because read count=-1. 
Number of active connections: 12 2016-08-18 10:08:27,830 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (-2038914599) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:27,831 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:08:27,871 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741953_1129{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:08:27,880 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741954_1130{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 662657 2016-08-18 10:08:28,298 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741955_1131{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 533455 2016-08-18 10:08:28,719 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741956_1132{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1351207 2016-08-18 10:08:29,153 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741957_1133{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:29,170 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741958_1134{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:29,177 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741959_1135{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 112558 2016-08-18 10:08:29,592 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741960_1136{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1475955 2016-08-18 10:08:30,010 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741961_1137{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 
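The long run of "For class X, using jar Y" entries above records TableMapReduceUtil locating, for each class the job depends on, the jar that provides it (packing a temporary jar under test-data when a class comes from unpacked test classes) and shipping it with the job through the distributed cache. A minimal sketch of the call that produces such entries, assuming an ordinary MapReduce Job handle:

import java.io.IOException;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class DependencyJarsSketch {
  // Ships the default HBase/ZooKeeper/Guava/protobuf/metrics/htrace jars plus
  // the job's own input/output format and key/value classes, the kind of
  // classes enumerated in the log above.
  static void shipDependencies(Job job) throws IOException {
    TableMapReduceUtil.addDependencyJars(job);
  }
}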
2016-08-18 10:08:30,019 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741962_1138{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 792964 2016-08-18 10:08:30,445 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741963_1139{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 4516740 2016-08-18 10:08:30,859 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741964_1140{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 662657 2016-08-18 10:08:31,280 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741965_1141{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1795932 2016-08-18 10:08:31,697 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741966_1142{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 38156 2016-08-18 10:08:32,102 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-18 10:08:32,119 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:59388/backupUT/backup_1471540016356/WALs for WAL files 2016-08-18 10:08:32,122 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/.backup.manifest does not appear to be a WAL file. Skipping...
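The WALInputFormat scan above walks the backed-up WAL directory and skips entries that are not WAL files (here the .backup.manifest); the Found: entries that follow list the files that qualify. A minimal sketch of an equivalent scan with the Hadoop FileSystem API, using the path from the log and an illustrative name filter rather than WALInputFormat's exact test:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path walDir = new Path("hdfs://localhost:59388/backupUT/backup_1471540016356/WALs");
    FileSystem fs = walDir.getFileSystem(conf);
    for (FileStatus stat : fs.listStatus(walDir)) {
      // Illustrative filter only: treat directories and dot-files
      // (e.g. .backup.manifest) as non-WAL entries, as the warning above does.
      if (stat.isDirectory() || stat.getPath().getName().startsWith(".")) {
        System.out.println("Skipping non-WAL entry: " + stat.getPath());
        continue;
      }
      System.out.println("Found WAL: " + stat.getPath() + " length=" + stat.getLen());
    }
  }
}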
2016-08-18 10:08:32,122 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024240; access_time=1471540023826; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,122 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471540022532; access_time=1471540022117; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,123 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024666; access_time=1471540024253; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,123 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471540022966; access_time=1471540022551; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,123 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471540025094; access_time=1471540024679; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,123 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471540023391; access_time=1471540022979; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,123 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471540025521; access_time=1471540025107; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,123 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528; isDirectory=false; length=1196; replication=1; blocksize=134217728; modification_time=1471540023814; access_time=1471540023404; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:08:32,132 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741967_1143{UCState=COMMITTED, truncateBlock=null, 
primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1647 2016-08-18 10:08:32,546 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741968_1144{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:32,577 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741969_1145{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 134681 2016-08-18 10:08:33,024 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-18 10:08:33,024 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-18 10:08:33,602 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:34,496 DEBUG [10.22.9.171,59399,1471539932874_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:08:34,856 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x493dff39 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:34,861 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x493dff390x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:34,862 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@439c2551, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:34,862 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:34,862 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:34,862 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x493dff39-0x1569e9d55410029 connected 2016-08-18 10:08:34,863 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-18 10:08:34,865 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:34,865 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59893; # active connections: 11 2016-08-18 10:08:34,866 INFO 
[RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:34,866 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59893 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:34,874 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:34,874 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59894; # active connections: 7 2016-08-18 10:08:34,875 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:34,875 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59894 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:34,880 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:08:34,882 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:08:34,882 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:08:34,883 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:08:34,883 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:08:34,884 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:08:34,884 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:08:34,885 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:08:34,885 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:08:34,886 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:08:34,886 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:08:34,887 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:08:34,887 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:08:34,888 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:08:34,888 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410029 2016-08-18 10:08:34,889 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:34,890 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59894 because read count=-1. Number of active connections: 7 2016-08-18 10:08:34,890 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (-626665768) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:34,890 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (5271001) to /10.22.9.171:59399 from tyu: closed 2016-08-18 10:08:34,890 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59893 because read count=-1. 
Number of active connections: 11 2016-08-18 10:08:35,360 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:08:38,442 INFO [Socket Reader #1 for port 59477] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:38,700 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741970_1146{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:40,293 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:08:40,331 DEBUG [10.22.9.171,59441,1471539940207_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:08:40,602 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info 2016-08-18 10:08:40,603 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/meta 2016-08-18 10:08:40,603 DEBUG [region-location-4] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info 2016-08-18 10:08:40,603 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table 2016-08-18 10:08:40,603 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/session 2016-08-18 10:08:40,680 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:40,680 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:41,540 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:41,542 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:42,544 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:43,551 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:45,530 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:45,552 WARN [ContainersLauncher #2] 
nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000002 is : 143 2016-08-18 10:08:46,570 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:47,057 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:47,082 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000005 is : 143 2016-08-18 10:08:47,207 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:47,231 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000003 is : 143 2016-08-18 10:08:47,380 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:47,401 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000004 is : 143 2016-08-18 10:08:47,574 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:48,349 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:48,372 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000006 is : 143 2016-08-18 10:08:48,845 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:48,863 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000007 is : 143 2016-08-18 10:08:49,597 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:49,898 WARN [AsyncDispatcher event handler] containermanager.ContainerManagerImpl$ContainerEventDispatcher(1070): Event EventType: KILL_CONTAINER sent to absent container container_1471539956090_0002_01_000010 2016-08-18 10:08:50,591 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:50,608 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000008 is : 143 2016-08-18 10:08:51,140 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:51,155 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000009 is : 143 2016-08-18 10:08:54,227 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59962; # active connections: 11 2016-08-18 10:08:54,567 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:54,567 INFO 
[RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59962 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:54,606 INFO [IPC Server handler 0 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741946_1122 127.0.0.1:59389 2016-08-18 10:08:54,776 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59962 because read count=-1. Number of active connections: 11 2016-08-18 10:08:55,151 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741946_1122] 2016-08-18 10:08:55,401 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741972_1148{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:08:55,428 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:08:55,442 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0002_01_000011 is : 143 2016-08-18 10:08:55,523 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741971_1147{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 16349 2016-08-18 10:08:55,532 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741973_1149{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:55,554 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741974_1150{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:55,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741975_1151{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:08:56,588 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741967_1143 127.0.0.1:59389 2016-08-18 10:08:56,588 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741968_1144 127.0.0.1:59389 2016-08-18 10:08:56,588 INFO [IPC Server handler 1 on 59388] 
blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741969_1145 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741971_1147 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741970_1146 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741965_1141 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741964_1140 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741966_1142 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741963_1139 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741956_1132 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741957_1133 127.0.0.1:59389 2016-08-18 10:08:56,589 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741954_1130 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741958_1134 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741953_1129 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741960_1136 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741959_1135 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741961_1137 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741955_1131 127.0.0.1:59389 2016-08-18 10:08:56,590 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741962_1138 127.0.0.1:59389 2016-08-18 10:08:57,258 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945 2016-08-18 10:08:57,259 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6f6f974f connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:57,263 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6f6f974f0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:57,264 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@369f22db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:57,265 DEBUG 
[main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:57,265 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:57,265 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6f6f974f-0x1569e9d5541002b connected 2016-08-18 10:08:57,267 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:57,267 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59972; # active connections: 11 2016-08-18 10:08:57,267 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:57,268 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59972 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:57,273 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-18 10:08:57,275 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/_SUCCESS 2016-08-18 10:08:57,280 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:08:57,280 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59973; # active connections: 12 2016-08-18 10:08:57,281 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:57,281 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59973 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:57,286 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:08:57,289 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/f/22e1b37cd0da48ada0e5bc9469b51a85 first=row-t20 last=row98 2016-08-18 10:08:57,292 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7., hostname=10.22.9.171,59399,1471539932874, 
seqNum=2 for row with hfile group [{[B@26d8ac80,hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/f/22e1b37cd0da48ada0e5bc9469b51a85}] 2016-08-18 10:08:57,293 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:57,293 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:59974; # active connections: 7 2016-08-18 10:08:57,294 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:57,294 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59974 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:57,294 INFO [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HStore(670): Validating hfile at hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/f/22e1b37cd0da48ada0e5bc9469b51a85 for inclusion in store f region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7. 2016-08-18 10:08:57,298 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HStore(682): HFile bounds: first=row-t20 last=row98 2016-08-18 10:08:57,298 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HStore(684): Region bounds: first= last= 2016-08-18 10:08:57,300 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/f/22e1b37cd0da48ada0e5bc9469b51a85 as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/95446d01462645d2a1e9cee81b4d71a2_SeqId_6_ 2016-08-18 10:08:57,300 INFO [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HStore(742): Loaded HFile hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/f/22e1b37cd0da48ada0e5bc9469b51a85 into store 'f' as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/95446d01462645d2a1e9cee81b4d71a2_SeqId_6_ - updating store file list. 
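The HStore validation above ("HFile bounds" vs. "Region bounds") checks that the staged file's key range is contained by the target region; empty region bounds, as for the single-region ns2:table2_restore, mean unbounded on both sides, so first=row-t20 last=row98 fits. A minimal illustrative containment test in the spirit of that check (not HStore's exact code), assuming raw byte[] row keys:

import org.apache.hadoop.hbase.util.Bytes;

public class BoundsCheckSketch {
  // True when [first, last] lies within [regionStart, regionEnd); a zero-length
  // regionStart or regionEnd means "unbounded", as for the region in the log.
  static boolean fitsRegion(byte[] first, byte[] last,
                            byte[] regionStart, byte[] regionEnd) {
    boolean aboveStart = regionStart.length == 0
        || Bytes.compareTo(first, regionStart) >= 0;
    boolean belowEnd = regionEnd.length == 0
        || Bytes.compareTo(last, regionEnd) < 0;
    return aboveStart && belowEnd;
  }
}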
2016-08-18 10:08:57,306 INFO [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HStore(777): Loaded HFile hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/95446d01462645d2a1e9cee81b4d71a2_SeqId_6_ into store 'f' 2016-08-18 10:08:57,306 INFO [B.defaultRpcServer.handler=0,queue=0,port=59399] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471540087945/f/22e1b37cd0da48ada0e5bc9469b51a85 into store f (new location: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/f/95446d01462645d2a1e9cee81b4d71a2_SeqId_6_) 2016-08-18 10:08:57,306 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:08:57,309 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:08:57,309 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541002b 2016-08-18 10:08:57,310 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:57,311 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0 2016-08-18 10:08:57,311 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59974 because read count=-1. Number of active connections: 7 2016-08-18 10:08:57,311 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410027 2016-08-18 10:08:57,311 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59972 because read count=-1. Number of active connections: 12 2016-08-18 10:08:57,311 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (1205159360) to /10.22.9.171:59399 from tyu: closed 2016-08-18 10:08:57,311 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59973 because read count=-1.
Number of active connections: 12 2016-08-18 10:08:57,311 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-630276592) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:57,311 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (570990208) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:57,312 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:08:57,312 INFO [main] impl.RestoreClientImpl(292): ns2:test-14715399571411 has been successfully restored to ns2:table2_restore 2016-08-18 10:08:57,312 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-18 10:08:57,312 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns2/test-14715399571411/ 2016-08-18 10:08:57,313 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471540016356 hdfs://localhost:59388/backupUT/backup_1471540016356/ns2/test-14715399571411/ 2016-08-18 10:08:57,312 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59857 because read count=-1. Number of active connections: 10 2016-08-18 10:08:57,312 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-110549907) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:08:57,313 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged image; to be implemented in a future JIRA 2016-08-18 10:08:57,314 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest 2016-08-18 10:08:57,317 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737 2016-08-18 10:08:57,317 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns3/test-14715399571412/.backup.manifest 2016-08-18 10:08:57,317 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14715399571412' to 'ns3:table3_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412 2016-08-18 10:08:57,323 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/archive/data/ns3/test-14715399571412 does not exist 2016-08-18 10:08:57,324 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns3:test-14715399571412, will only create table 2016-08-18 10:08:57,324 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7bf61fec connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:08:57,326 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7bf61fec0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:08:57,326 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c94f32d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:08:57,327 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:08:57,327 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:08:57,327 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(674): hconnection-0x7bf61fec-0x1569e9d5541002c connected 2016-08-18 10:08:57,329 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:08:57,329 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59979; # active connections: 10 2016-08-18 10:08:57,330 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:57,330 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59979 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:57,330 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns3:table3_restore', preserving region splits 2016-08-18 10:08:57,332 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:08:57,332 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59980; # active connections: 11 2016-08-18 10:08:57,333 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:08:57,333 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59980 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:08:57,333 INFO [main] client.HBaseAdmin$10(780): Started disable of ns3:table3_restore 2016-08-18 10:08:57,334 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns3:table3_restore 2016-08-18 10:08:57,437 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store.
2016-08-18 10:08:57,440 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 10:08:57,440 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:593960000000001 2016-08-18 10:08:57,542 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 10:08:57,651 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540137651,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 10:08:57,653 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:57,654 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLING in META 2016-08-18 10:08:57,748 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 10:08:57,759 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-18 10:08:57,760 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. (offlining), current state: {36ac3931d4f13816604ff9289aebc876 state=OPEN, ts=1471540041541, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:57,761 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {36ac3931d4f13816604ff9289aebc876 state=OPEN, ts=1471540041541, server=10.22.9.171,59399,1471539932874} to {36ac3931d4f13816604ff9289aebc876 state=PENDING_CLOSE, ts=1471540137761, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:57,761 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. with state=PENDING_CLOSE 2016-08-18 10:08:57,761 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:57,763 INFO [PriorityRpcServer.handler=2,queue=0,port=59399] regionserver.RSRpcServices(1314): Close 36ac3931d4f13816604ff9289aebc876, moving to null 2016-08-18 10:08:57,763 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
2016-08-18 10:08:57,764 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.: disabling compactions & flushes 2016-08-18 10:08:57,764 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,59399,1471539932874 for region ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:08:57,764 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:08:57,765 INFO [StoreCloserThread-ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.-1] regionserver.HStore(839): Closed f 2016-08-18 10:08:57,765 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:08:57,771 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2 2016-08-18 10:08:57,773 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:08:57,774 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.AssignmentManager(2884): Got transition CLOSED for {36ac3931d4f13816604ff9289aebc876 state=PENDING_CLOSE, ts=1471540137761, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874 2016-08-18 10:08:57,774 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.RegionStates(1106): Transition {36ac3931d4f13816604ff9289aebc876 state=PENDING_CLOSE, ts=1471540137761, server=10.22.9.171,59399,1471539932874} to {36ac3931d4f13816604ff9289aebc876 state=OFFLINE, ts=1471540137774, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:08:57,774 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. with state=OFFLINE 2016-08-18 10:08:57,775 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:57,776 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.RegionStates(590): Offlined 36ac3931d4f13816604ff9289aebc876 from 10.22.9.171,59399,1471539932874 2016-08-18 10:08:57,776 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
2016-08-18 10:08:57,922 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540137921,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 10:08:57,923 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:57,924 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLED in META 2016-08-18 10:08:57,924 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(424): Disabled table, ns3:table3_restore, is completed. 2016-08-18 10:08:58,053 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 10:08:58,140 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:593960000000001 2016-08-18 10:08:58,140 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 698msec: DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=FINISHED 2016-08-18 10:08:58,152 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741953_1129, blk_1073741954_1130, blk_1073741955_1131, blk_1073741956_1132, blk_1073741957_1133, blk_1073741958_1134, blk_1073741959_1135, blk_1073741960_1136, blk_1073741961_1137, blk_1073741962_1138, blk_1073741963_1139, blk_1073741964_1140, blk_1073741965_1141, blk_1073741966_1142, blk_1073741967_1143, blk_1073741968_1144, blk_1073741969_1145, blk_1073741970_1146, blk_1073741971_1147] 2016-08-18 10:08:58,558 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 10:08:58,558 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns3:table3_restore completed 2016-08-18 10:08:58,560 INFO [main] client.HBaseAdmin$8(615): Started truncating ns3:table3_restore 2016-08-18 10:08:58,561 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns3:table3_restore 2016-08-18 10:08:58,664 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 
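The disable-then-truncate sequence recorded above (HBaseAdmin "Started disable", DisableTableProcedure, then TruncateTableProcedure with preserveSplits=true) is driven through the public Admin API. A minimal client-side sketch of the equivalent calls, assuming a reachable cluster configuration; the class name is illustrative, and the table name is taken from the log. Note that truncateTable requires the table to be disabled first, which is why the restore utility issues the disable beforehand.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncateRestoreTarget {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName target = TableName.valueOf("ns3:table3_restore");
      // Corresponds to "Started disable of ns3:table3_restore"; blocks until
      // the master's DisableTableProcedure reaches FINISHED.
      if (admin.isTableEnabled(target)) {
        admin.disableTable(target);
      }
      // Corresponds to "TruncateTableProcedure (table=... preserveSplits=true)";
      // the boolean keeps the existing region boundaries.
      admin.truncateTable(target, true);
    }
  }
}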
2016-08-18 10:08:58,667 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:593960000000002 2016-08-18 10:08:58,668 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(87): waiting for 'ns3:table3_restore' regions in transition 2016-08-18 10:08:58,777 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471540138777,"tag":[],"qualifier":"","vlen":0}]},"row":"ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876."} 2016-08-18 10:08:58,778 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:58,779 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 36ac3931d4f13816604ff9289aebc876, NAME => 'ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.', STARTKEY => '', ENDKEY => ''}] 2016-08-18 10:08:58,781 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(408): Removing 'ns3:table3_restore' from region states. 2016-08-18 10:08:58,785 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(412): Marking 'ns3:table3_restore' as deleted. 2016-08-18 10:08:58,785 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471540138785,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns3:table3_restore"} 2016-08-18 10:08:58,786 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:58,787 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1726): Deleted table ns3:table3_restore state from META 2016-08-18 10:08:58,898 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(340): Archiving region ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
from FS 2016-08-18 10:08:58,898 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876 2016-08-18 10:08:58,901 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits] 2016-08-18 10:08:58,909 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits/4.seqid, to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits/4.seqid 2016-08-18 10:08:58,910 INFO [IPC Server handler 6 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741919_1095 127.0.0.1:59389 2016-08-18 10:08:58,913 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876 2016-08-18 10:08:58,913 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(344): Table 'ns3:table3_restore' archived! 
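The HFileArchiver entries above show the fixed data-to-archive mapping: files under <root>/.tmp/data/<ns>/<table>/<region>/ are moved to the parallel <root>/archive/data/<ns>/<table>/<region>/ tree. A small sketch of that path mapping; the helper is illustrative (not an HBase API), with the root, table, and region values copied from the log.

import org.apache.hadoop.fs.Path;

public class ArchivePathDemo {
  // Illustrative helper mirroring the data -> archive move in the log.
  static Path toArchive(Path rootDir, String ns, String table, String region, String relFile) {
    return new Path(rootDir, "archive/data/" + ns + "/" + table + "/" + region + "/" + relFile);
  }

  public static void main(String[] args) {
    Path root = new Path("hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179");
    // Prints the same destination the HFileArchiver logged for 4.seqid above.
    System.out.println(toArchive(root, "ns3", "table3_restore",
        "36ac3931d4f13816604ff9289aebc876", "recovered.edits/4.seqid"));
  }
}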
2016-08-18 10:08:58,915 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741918_1094 127.0.0.1:59389 2016-08-18 10:08:59,036 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741976_1152{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 291 2016-08-18 10:08:59,445 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 10:08:59,447 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns3:table3_restore 2016-08-18 10:08:59,456 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741977_1153{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 45 2016-08-18 10:08:59,860 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:08:59,861 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.: disabling compactions & flushes 2016-08-18 10:08:59,862 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:08:59,862 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
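The HTD printed in the "creating HRegion" entry fully specifies the recreated table. For reference, a sketch of an equivalent descriptor built with the client API of this era (HTableDescriptor/HColumnDescriptor); all values are copied from the logged schema, and defaults such as TTL => 'FOREVER' are left untouched.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.regionserver.BloomType;

public class Table3RestoreSchema {
  public static HTableDescriptor build() {
    HColumnDescriptor f = new HColumnDescriptor("f");
    f.setMaxVersions(1);                              // VERSIONS => '1'
    f.setBloomFilterType(BloomType.NONE);             // BLOOMFILTER => 'NONE'
    f.setCompressionType(Compression.Algorithm.NONE); // COMPRESSION => 'NONE'
    f.setKeepDeletedCells(KeepDeletedCells.FALSE);    // KEEP_DELETED_CELLS => 'FALSE'
    f.setBlocksize(65536);                            // BLOCKSIZE => '65536'
    f.setInMemory(false);                             // IN_MEMORY => 'false'
    f.setBlockCacheEnabled(true);                     // BLOCKCACHE => 'true'
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns3:table3_restore"));
    htd.addFamily(f);
    return htd;
  }
}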
2016-08-18 10:08:59,970 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876."} 2016-08-18 10:08:59,971 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:08:59,972 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 10:09:00,078 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,079 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns3:table3_restore state org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122) at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-18 10:09:00,079 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {36ac3931d4f13816604ff9289aebc876 state=OFFLINE, ts=1471540140078, server=null} to {36ac3931d4f13816604ff9289aebc876 state=PENDING_OPEN, ts=1471540140079, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:09:00,079 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,080 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:09:00,082 INFO [PriorityRpcServer.handler=3,queue=1,port=59399] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:09:00,087 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(6339): Opening region: {ENCODED => 36ac3931d4f13816604ff9289aebc876, NAME => 'ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.', STARTKEY => '', ENDKEY => ''} 2016-08-18 10:09:00,087 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore 36ac3931d4f13816604ff9289aebc876 2016-08-18 10:09:00,087 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:09:00,090 INFO [StoreOpener-36ac3931d4f13816604ff9289aebc876-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 10:09:00,090 INFO [StoreOpener-36ac3931d4f13816604ff9289aebc876-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 10:09:00,091 DEBUG [StoreOpener-36ac3931d4f13816604ff9289aebc876-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/f 2016-08-18 10:09:00,092 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876 2016-08-18 10:09:00,096 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 10:09:00,096 INFO [RS_OPEN_REGION-10.22.9.171:59399-2] regionserver.HRegion(871): Onlined 36ac3931d4f13816604ff9289aebc876; next sequenceid=2 2016-08-18 10:09:00,096 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:09:00,097 INFO [PostOpenDeployTasks:36ac3931d4f13816604ff9289aebc876] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 2016-08-18 10:09:00,098 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {36ac3931d4f13816604ff9289aebc876 state=PENDING_OPEN, ts=1471540140079, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,098 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStates(1106): Transition {36ac3931d4f13816604ff9289aebc876 state=PENDING_OPEN, ts=1471540140079, server=10.22.9.171,59399,1471539932874} to {36ac3931d4f13816604ff9289aebc876 state=OPEN, ts=1471540140098, server=10.22.9.171,59399,1471539932874} 2016-08-18 10:09:00,098 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,098 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:09:00,099 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.RegionStates(452): Onlined 36ac3931d4f13816604ff9289aebc876 on 10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,099 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,099 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540140099,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 10:09:00,099 ERROR [B.defaultRpcServer.handler=1,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns3:table3_restore state org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-18 10:09:00,099 DEBUG [PostOpenDeployTasks:36ac3931d4f13816604ff9289aebc876] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. 
2016-08-18 10:09:00,100 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694 2016-08-18 10:09:00,100 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-2] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876. on 10.22.9.171,59399,1471539932874 2016-08-18 10:09:00,101 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META 2016-08-18 10:09:00,206 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(129): truncate 'ns3:table3_restore' completed 2016-08-18 10:09:00,317 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:593960000000002 2016-08-18 10:09:00,317 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 1.6450sec: TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=FINISHED 2016-08-18 10:09:00,436 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=24 2016-08-18 10:09:00,436 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns3:table3_restore completed 2016-08-18 10:09:00,436 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:09:00,436 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541002c 2016-08-18 10:09:00,439 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:09:00,440 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns3:test-14715399571412' to 'ns3:table3_restore' from log dirs: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs 2016-08-18 10:09:00,440 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59979 because read count=-1. Number of active connections: 11 2016-08-18 10:09:00,440 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (11613642) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:00,440 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59980 because read count=-1. 
Number of active connections: 11 2016-08-18 10:09:00,440 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (1658057712) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:00,441 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x48a3f5ea connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:00,444 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x48a3f5ea0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:00,444 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cf62285, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:09:00,445 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:09:00,445 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:09:00,445 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x48a3f5ea-0x1569e9d5541002d connected 2016-08-18 10:09:00,447 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:00,447 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59986; # active connections: 10 2016-08-18 10:09:00,450 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:00,450 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59986 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:00,451 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:59388/backupUT/backup_1471540016356/WALs from hbase tables ,ns3:test-14715399571412 to tables ,ns3:table3_restore 2016-08-18 10:09:00,451 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns3:test-14715399571412 into ns3:table3_restore 2016-08-18 10:09:00,453 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471540140451 from hdfs://localhost:59388/backupUT/backup_1471540016356/WALs 2016-08-18 10:09:00,453 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x94037d5 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:00,455 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x94037d50x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:00,456 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19628d15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:09:00,456 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 
2016-08-18 10:09:00,456 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:09:00,457 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x94037d5-0x1569e9d5541002e connected 2016-08-18 10:09:00,458 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:09:00,458 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59988; # active connections: 11 2016-08-18 10:09:00,458 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:00,459 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59988 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:00,460 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-18 10:09:00,460 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns3:test-14715399571412 2016-08-18 10:09:00,462 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:00,463 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:59989; # active connections: 12 2016-08-18 10:09:00,463 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:00,463 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 59989 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:00,466 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-18 10:09:00,466 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_db8fb14f-b899-4f64-870e-61ce9e9640a7 2016-08-18 10:09:00,471 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741978_1154{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 153 2016-08-18 10:09:00,878 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
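What MapReduceRestoreService sets up here is a WALPlayer job whose output is HFiles for a later bulk load rather than live writes (the "add incremental job" and HFileOutputFormat2 entries above). A hedged sketch of an equivalent standalone invocation: the wal.bulk.output property is assumed to be the bulk-output key in this snapshot (older releases used hlog.bulk.output), and the local output path is illustrative, mirroring the bulk_output-* directory in the log.

import org.apache.hadoop.hbase.mapreduce.WALPlayer;

public class ReplayBackupWals {
  public static void main(String[] args) throws Exception {
    // -Dwal.bulk.output switches WALPlayer from issuing Puts to writing
    // HFiles under the given directory; ToolRunner parses the -D option.
    WALPlayer.main(new String[] {
        "-Dwal.bulk.output=/tmp/bulk_output-ns3-table3_restore",
        "hdfs://localhost:59388/backupUT/backup_1471540016356/WALs", // WAL input dir
        "ns3:test-14715399571412", // table recorded in the WALs
        "ns3:table3_restore"       // target table it is mapped onto
    });
  }
}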
2016-08-18 10:09:01,156 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741918_1094, blk_1073741919_1095] 2016-08-18 10:09:01,616 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-2008250301113189851.jar 2016-08-18 10:09:01,759 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0002_000001 (auth:SIMPLE) 2016-08-18 10:09:03,282 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-18 10:09:10,451 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3091793285218094542.jar 2016-08-18 10:09:12,077 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7642132032138029456.jar 2016-08-18 10:09:12,119 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-5751257598956167358.jar 2016-08-18 10:09:18,928 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-6409759407599158344.jar 2016-08-18 10:09:18,929 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 10:09:18,929 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 10:09:18,929 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 10:09:18,929 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:09:18,930 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 10:09:18,930 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 10:09:19,129 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3744797255312927217.jar 2016-08-18 10:09:19,129 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar 
/Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3744797255312927217.jar 2016-08-18 10:09:20,294 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3108831963689178725.jar 2016-08-18 10:09:20,294 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3744797255312927217.jar 2016-08-18 10:09:20,295 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3744797255312927217.jar 2016-08-18 10:09:20,295 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-3108831963689178725.jar 2016-08-18 10:09:20,295 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-18 10:09:20,296 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns3:test-14715399571412 output configured. 2016-08-18 10:09:20,296 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:09:20,296 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541002e 2016-08-18 10:09:20,296 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:09:20,298 DEBUG [main] mapreduce.WALPlayer(324): success configuring load incremental job 2016-08-18 10:09:20,298 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59988 because read count=-1. Number of active connections: 12 2016-08-18 10:09:20,298 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (397560792) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:20,298 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59989 because read count=-1. 
Number of active connections: 12 2016-08-18 10:09:20,298 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-1354154769) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:20,298 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 10:09:20,336 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741979_1155{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1556922 2016-08-18 10:09:20,751 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741980_1156{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 533455 2016-08-18 10:09:21,176 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741981_1157{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 4516740 2016-08-18 10:09:21,588 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741982_1158{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 112558 2016-08-18 10:09:22,005 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741983_1159{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 662657 2016-08-18 10:09:22,422 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741984_1160{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1475955 2016-08-18 10:09:22,834 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741985_1161{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 38156 2016-08-18 10:09:23,252 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741986_1162{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 2057506 2016-08-18 10:09:23,670 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741987_1163{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1351207 2016-08-18 10:09:24,096 INFO [Block report 
processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741988_1164{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 4669607 2016-08-18 10:09:24,510 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741989_1165{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 662657 2016-08-18 10:09:24,929 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741990_1166{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:09:24,941 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741991_1167{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:09:24,960 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741992_1168{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 4516740 2016-08-18 10:09:25,366 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-18 10:09:25,382 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:59388/backupUT/backup_1471540016356/WALs for WAL files 2016-08-18 10:09:25,385 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/.backup.manifest does not appear to be a WAL file. Skipping...
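WALInputFormat scans the backup WAL directory and skips anything it cannot read as a WAL, such as the .backup.manifest above, then lists the usable files that follow. A rough name-based stand-in for that scan is sketched below; the real input format decides by attempting to read each file, not by its name, so this helper and its class name are purely illustrative.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBackupWals {
  // Collect every plain file in the WAL directory except the backup manifest.
  static List<FileStatus> walFiles(Configuration conf, Path walDir) throws IOException {
    FileSystem fs = walDir.getFileSystem(conf);
    List<FileStatus> result = new ArrayList<>();
    for (FileStatus stat : fs.listStatus(walDir)) {
      if (stat.isFile() && !stat.getPath().getName().equals(".backup.manifest")) {
        result.add(stat); // candidates like the regiongroup-* files logged below
      }
    }
    return result;
  }
}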
2016-08-18 10:09:25,385 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024240; access_time=1471540023826; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,385 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471540022532; access_time=1471540022117; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,385 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024666; access_time=1471540024253; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,385 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471540022966; access_time=1471540022551; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,386 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471540025094; access_time=1471540024679; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,386 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471540023391; access_time=1471540022979; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,386 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471540025521; access_time=1471540025107; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,386 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528; isDirectory=false; length=1196; replication=1; blocksize=134217728; modification_time=1471540023814; access_time=1471540023404; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:25,393 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741993_1169{UCState=COMMITTED, truncateBlock=null, 
primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1647 2016-08-18 10:09:25,806 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741994_1170{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 59 2016-08-18 10:09:26,225 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741995_1171{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 134682 2016-08-18 10:09:26,669 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-18 10:09:26,669 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-18 10:09:26,839 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:31,649 INFO [Socket Reader #1 for port 59477] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:31,910 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741996_1172{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0 2016-08-18 10:09:33,866 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:33,866 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:34,489 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:09:34,491 DEBUG [10.22.9.171,59399,1471539932874_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:09:34,733 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:34,734 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:34,878 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x382ec902 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:34,881 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x382ec9020x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, 
path=null 2016-08-18 10:09:34,882 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ccbdd1d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:09:34,882 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:09:34,882 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:09:34,883 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-18 10:09:34,883 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x382ec902-0x1569e9d5541002f connected 2016-08-18 10:09:34,886 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:34,886 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60048; # active connections: 11 2016-08-18 10:09:34,887 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:34,888 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60048 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:34,892 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:34,892 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:60049; # active connections: 7 2016-08-18 10:09:34,893 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:34,893 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60049 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:34,896 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:09:34,897 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:09:34,897 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] 
impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:09:34,899 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:09:34,899 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:09:34,900 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:09:34,900 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:09:34,901 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:09:34,901 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:09:34,902 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:09:34,903 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:09:34,904 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:09:34,904 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:09:34,905 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:09:34,905 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541002f 2016-08-18 10:09:34,906 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:09:34,907 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60048 because read count=-1. Number of active connections: 11 2016-08-18 10:09:34,907 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-578407500) to /10.22.9.171:59399 from tyu: closed 2016-08-18 10:09:34,907 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:60049 because read count=-1. Number of active connections: 7 2016-08-18 10:09:34,907 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1919292283) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:35,738 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:36,748 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:39,154 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:39,177 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000002 is : 143 2016-08-18 10:09:39,774 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:40,075 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:40,101 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000003 is : 143 2016-08-18 10:09:40,127 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:40,148 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:40,153 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000004 is : 143 2016-08-18 10:09:40,169 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000005 is : 143 2016-08-18 10:09:40,293 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:09:40,296 DEBUG [10.22.9.171,59441,1471539940207_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 10:09:40,603 DEBUG [region-location-4] regionserver.HRegionFileSystem(202): No 
StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info 2016-08-18 10:09:40,603 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info 2016-08-18 10:09:40,603 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/meta 2016-08-18 10:09:40,604 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table 2016-08-18 10:09:40,604 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/session 2016-08-18 10:09:40,780 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:41,289 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:41,309 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000006 is : 143 2016-08-18 10:09:41,945 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:41,964 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000007 is : 143 2016-08-18 10:09:42,803 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:43,175 WARN [AsyncDispatcher event handler] containermanager.ContainerManagerImpl$ContainerEventDispatcher(1070): Event EventType: KILL_CONTAINER sent to absent container container_1471539956090_0003_01_000010 2016-08-18 10:09:43,728 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:43,749 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000008 is : 143 2016-08-18 10:09:44,376 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:44,391 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000009 is : 143 2016-08-18 10:09:46,026 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:46,042 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0003_01_000011 is : 143 2016-08-18 10:09:46,068 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741997_1173{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 16349 2016-08-18 10:09:46,078 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741998_1174{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:09:46,100 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741999_1175{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:09:46,116 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742000_1176{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:09:47,135 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741993_1169 127.0.0.1:59389 2016-08-18 10:09:47,135 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741994_1170 127.0.0.1:59389 2016-08-18 10:09:47,135 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741995_1171 127.0.0.1:59389 2016-08-18 10:09:47,135 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741997_1173 127.0.0.1:59389 2016-08-18 10:09:47,135 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741996_1172 127.0.0.1:59389 2016-08-18 10:09:47,135 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741991_1167 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741989_1165 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741988_1164 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741992_1168 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741983_1159 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741985_1161 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741981_1157 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741987_1163 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741979_1155 127.0.0.1:59389 2016-08-18 10:09:47,136 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741984_1160 127.0.0.1:59389 2016-08-18 10:09:47,137 
INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741982_1158 127.0.0.1:59389 2016-08-18 10:09:47,137 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741986_1162 127.0.0.1:59389 2016-08-18 10:09:47,137 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741980_1156 127.0.0.1:59389 2016-08-18 10:09:47,137 INFO [IPC Server handler 3 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741990_1166 127.0.0.1:59389 2016-08-18 10:09:47,964 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471540140451 2016-08-18 10:09:47,965 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xbebc548 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:47,969 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xbebc5480x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:47,973 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c9684a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:09:47,974 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:09:47,974 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xbebc548-0x1569e9d55410030 connected 2016-08-18 10:09:47,974 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:09:47,976 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:47,976 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60109; # active connections: 11 2016-08-18 10:09:47,977 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:47,977 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60109 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:47,983 DEBUG [main] client.ConnectionImplementation(604): Table ns3:table3_restore should be available 2016-08-18 10:09:47,986 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471540140451/_SUCCESS 2016-08-18 10:09:47,987 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:09:47,987 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60111; # active connections: 12 2016-08-18 10:09:47,988 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] 
ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:47,988 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60111 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:47,989 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471540140451. Does it contain files in subdirectories that correspond to column family names? 2016-08-18 10:09:47,990 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 10:09:47,990 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410030 2016-08-18 10:09:47,990 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:09:47,991 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished: 0 2016-08-18 10:09:47,991 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60109 because read count=-1. Number of active connections: 12 2016-08-18 10:09:47,991 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541002d 2016-08-18 10:09:47,991 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (208654180) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:47,991 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60111 because read count=-1. Number of active connections: 12 2016-08-18 10:09:47,991 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (16752944) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:47,991 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:09:47,992 INFO [main] impl.RestoreClientImpl(292): ns3:test-14715399571412 has been successfully restored to ns3:table3_restore 2016-08-18 10:09:47,992 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-18 10:09:47,992 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59986 because read count=-1. Number of active connections: 10 2016-08-18 10:09:47,992 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-111432179) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:47,992 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns3/test-14715399571412/ 2016-08-18 10:09:47,992 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471540016356 hdfs://localhost:59388/backupUT/backup_1471540016356/ns3/test-14715399571412/ 2016-08-18 10:09:47,992 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished 2016-08-18 10:09:47,992 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1471539957141, ns2:test-14715399571411, ns3:test-14715399571412] is successful!
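The restore sequence above ends with two LoadIncrementalHFiles warnings: the tool skips the MapReduce _SUCCESS marker and then finds no column-family subdirectories to load, so the bulk load is a no-op while the restore still reports success (exit code 0). For readers following along, this is roughly how that bulk-load step is driven; the output path and table name below are placeholders standing in for the bulk_output-ns3-table3_restore-... directory, not values taken from a real run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class BulkLoadRestoreOutput {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // LoadIncrementalHFiles expects one subdirectory per column family under the
        // output dir; plain files such as _SUCCESS are skipped with the warning seen above.
        int exit = ToolRunner.run(conf, new LoadIncrementalHFiles(conf),
            new String[] { "/tmp/bulk_output-ns3-table3_restore", "ns3:table3_restore" });
        System.exit(exit);
      }
    }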
2016-08-18 10:09:48,032 INFO [main] util.BackupClientUtil(105): Using existing backup root dir: hdfs://localhost:59388/backupUT 2016-08-18 10:09:48,034 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] impl.BackupSystemTable(431): get incr backup table set from hbase:backup 2016-08-18 10:09:48,035 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.HMaster(2641): Incremental backup for the following table set: [ns3:test-14715399571412, ns4:test-14715399571413, ns1:test-1471539957141, ns2:test-14715399571411] 2016-08-18 10:09:48,038 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3533aa4b connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:48,040 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3533aa4b0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:48,041 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75f998dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:09:48,041 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:09:48,041 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:09:48,041 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] backup.BackupInfo(125): CreateBackupContext: 4 ns3:test-14715399571412 2016-08-18 10:09:48,042 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3533aa4b-0x1569e9d55410031 connected 2016-08-18 10:09:48,146 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:59388/backupUT; backupId=backup_1471540188034; tables=ns3:test-14715399571412,ns4:test-14715399571413,ns1:test-1471539957141,ns2:test-14715399571411) id=25 state=RUNNABLE:PREPARE_INCREMENTAL added to the store. 2016-08-18 10:09:48,149 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:48,150 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:593960000000003 2016-08-18 10:09:48,150 INFO [ProcedureExecutor-1] master.FullTableBackupProcedure(130): Backup backup_1471540188034 started at 1471540188150. 
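The procedure-framework interaction above is a plain submit-then-poll loop: the master stores IncrementalTableBackupProcedure (procId=25) and MasterRpcServices then logs "Checking to see if procedure is done procId=25" every few hundred milliseconds until it completes. A minimal sketch of that client-side pattern follows; ProcedureClient and both of its methods are invented names for illustration only, not part of the HBase API.

    import java.util.concurrent.TimeUnit;

    // Invented interface for illustration; not an HBase API.
    interface ProcedureClient {
      long submitIncrementalBackup(String targetRootDir, String... tables);
      boolean isProcedureDone(long procId);
    }

    public class PollBackupProcedure {
      // Blocks until the master-side procedure (procId=25 in this log) reports completion.
      static void awaitProcedure(ProcedureClient client, long procId) throws InterruptedException {
        while (!client.isProcedureDone(procId)) {
          TimeUnit.MILLISECONDS.sleep(250); // the log shows probes every ~200-500 ms
        }
      }
    }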
2016-08-18 10:09:48,150 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471540188034 set status=RUNNING 2016-08-18 10:09:48,153 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:48,153 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60117; # active connections: 10 2016-08-18 10:09:48,154 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:48,154 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60117 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:48,158 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:48,158 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:60118; # active connections: 7 2016-08-18 10:09:48,158 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:48,159 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60118 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:48,159 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:09:48,161 DEBUG [ProcedureExecutor-1] master.FullTableBackupProcedure(134): Backup session backup_1471540188034 has been started. 2016-08-18 10:09:48,161 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(431): get incr backup table set from hbase:backup 2016-08-18 10:09:48,162 DEBUG [ProcedureExecutor-1] master.IncrementalTableBackupProcedure(216): For incremental backup, current table set is [ns3:test-14715399571412, ns4:test-14715399571413, ns1:test-1471539957141, ns2:test-14715399571411] 2016-08-18 10:09:48,162 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(180): read backup start code from hbase:backup 2016-08-18 10:09:48,163 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:09:48,165 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(93): StartCode 1471539968108 for backupID backup_1471540188034 2016-08-18 10:09:48,165 INFO [ProcedureExecutor-1] impl.IncrementalBackupManager(104): Execute roll log procedure for incremental backup ...
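The roll-log step is dispatched as a globally coordinated procedure: the master logs "procedure request for: rolllog-proc" just below, and the coordinator drives an instance named 'rolllog' (see the znode paths that follow). A hedged sketch of triggering such a procedure through the public Admin API; the signature and instance strings are inferred from this log, and the "backupRoot" property key is an assumption for illustration.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TriggerRollLogProcedure {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          Map<String, String> props = new HashMap<>();
          props.put("backupRoot", "hdfs://localhost:59388/backupUT"); // assumed property key
          // Signature and instance mirror the znodes in this log: /1/rolllog-proc/acquired/rolllog
          admin.execProcedure("rolllog-proc", "rolllog", props);
        }
      }
    }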
2016-08-18 10:09:48,167 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 10:09:48,167 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60119; # active connections: 11 2016-08-18 10:09:48,167 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:48,167 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60119 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:48,168 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(652): Client=tyu//10.22.9.171 procedure request for: rolllog-proc 2016-08-18 10:09:48,168 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] procedure.ProcedureCoordinator(177): Submitting procedure rolllog 2016-08-18 10:09:48,169 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(196): Starting procedure 'rolllog' 2016-08-18 10:09:48,169 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-18 10:09:48,169 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire' 2016-08-18 10:09:48,169 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members. 
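What follows in the log is the acquire phase of a two-phase ZooKeeper barrier: the coordinator creates /1/rolllog-proc/acquired/rolllog and watches for one child znode per member, while each member sets a watch on the abort node. A minimal sketch of the coordinator side, assuming the parent znodes already exist and reusing the paths from this log:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class AcquireBarrierSketch {
      // Coordinator side of the acquire phase: publish the procedure znode, then watch
      // for one child per member (e.g. .../rolllog/10.22.9.171,59399,1471539932874).
      static void startAcquire(ZooKeeper zk, byte[] procData) throws Exception {
        zk.create("/1/rolllog-proc/acquired/rolllog", procData,
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.getChildren("/1/rolllog-proc/acquired/rolllog",
            event -> System.out.println("member acquired: " + event.getPath()));
      }
    }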
2016-08-18 10:09:48,169 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-18 10:09:48,169 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:48,170 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,170 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:09:48,170 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:09:48,170 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:09:48,170 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:09:48,170 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:09:48,170 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,170 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,170 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:09:48,171 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:48,171 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,171 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2016-08-18 10:09:48,171 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:48,171 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2016-08-18 10:09:48,171 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, 
/1/rolllog-proc/abort/rolllog 2016-08-18 10:09:48,171 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35 2016-08-18 10:09:48,171 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:48,171 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup. 2016-08-18 10:09:48,171 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35 2016-08-18 10:09:48,171 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:48,171 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup. 2016-08-18 10:09:48,172 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure. 2016-08-18 10:09:48,172 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure. 2016-08-18 10:09:48,172 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2016-08-18 10:09:48,172 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2016-08-18 10:09:48,172 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms 2016-08-18 10:09:48,172 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-18 10:09:48,172 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure (rolllog) in zk 2016-08-18 10:09:48,175 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure (rolllog) in zk 2016-08-18 10:09:48,177 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,177 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,177 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,177 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,177 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,177 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,177 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:09:48,177 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:09:48,177 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,177 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:09:48,177 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,177 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2016-08-18 10:09:48,178 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:09:48,178 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:48,178 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59399,1471539932874' joining acquired barrier for procedure 'rolllog' on coordinator 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@64b4b346[Count = 1] remaining members to acquire global barrier 2016-08-18 10:09:48,179 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,179 INFO 
[main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,179 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:09:48,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:09:48,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:09:48,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:48,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,180 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,181 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:09:48,181 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:09:48,181 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,59396,1471539932179' joining acquired barrier for procedure 'rolllog' on coordinator 2016-08-18 10:09:48,181 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@64b4b346[Count = 0] remaining members to acquire global barrier 2016-08-18 10:09:48,181 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution. 
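The coordinator's bookkeeping above is an ordinary java.util.concurrent.CountDownLatch: each member's acquire znode counts it down (Count = 1, then Count = 0), and only then does the procedure move to 'in-barrier' execution. A self-contained sketch of that pattern, with the two members of this run simulated inline:

    import java.util.concurrent.CountDownLatch;

    public class BarrierLatchSketch {
      public static void main(String[] args) throws InterruptedException {
        CountDownLatch acquiredBarrierLatch = new CountDownLatch(2); // two members in this run
        acquiredBarrierLatch.countDown(); // 10.22.9.171,59399,... joins -> "Waiting on ... Count = 1"
        acquiredBarrierLatch.countDown(); // 10.22.9.171,59396,... joins -> "Count = 0"
        acquiredBarrierLatch.await();     // coordinator proceeds to 'in-barrier' execution
      }
    }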
2016-08-18 10:09:48,181 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,182 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator. 2016-08-18 10:09:48,182 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator. 2016-08-18 10:09:48,182 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,182 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:09:48,182 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,182 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,182 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,182 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish. 2016-08-18 10:09:48,182 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:09:48,182 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
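Inside the barrier, each region server runs RSRollLogTask, which forces its WALs to roll; the "Rolled WAL ... new WAL ..." lines that follow are the result. A client can request the same per-server WAL roll through the Admin API, as sketched below; the server name is copied from this log purely for illustration.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Rolls the WAL writer on one region server; the subprocedure above achieves
          // the same effect from inside the server for every regiongroup WAL.
          admin.rollWALWriter(ServerName.valueOf("10.22.9.171,59399,1471539932874"));
        }
      }
    }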
2016-08-18 10:09:48,182 INFO [rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471540017355 on 10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,182 INFO [rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471540016518 on 10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,182 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2016-08-18 10:09:48,183 DEBUG [master//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested 2016-08-18 10:09:48,183 DEBUG [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested 2016-08-18 10:09:48,183 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:09:48,183 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:48,183 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:48,185 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:48,185 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:09:48,185 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:09:48,186 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:48,186 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540188183 2016-08-18 10:09:48,186 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:48,186 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183 2016-08-18 10:09:48,190 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 2016-08-18 10:09:48,190 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:09:48,191 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 2016-08-18 10:09:48,191 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:09:48,195 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 
is added to blk_1073741886_1062{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:09:48,196 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741885_1061{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1615 2016-08-18 10:09:48,253 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:48,456 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:48,601 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540188183 2016-08-18 10:09:48,602 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 with entries=6, filesize=1.58 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183 2016-08-18 10:09:48,602 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 2016-08-18 10:09:48,603 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:09:48,609 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604 2016-08-18 10:09:48,609 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605 2016-08-18 10:09:48,615 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:09:48,615 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 2016-08-18 10:09:48,616 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:09:48,617 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 2016-08-18 10:09:48,623 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741887_1063{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:09:48,623 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741888_1064{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1615 2016-08-18 10:09:48,760 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:49,029 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 with entries=6, filesize=1.58 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605 2016-08-18 10:09:49,029 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604 2016-08-18 10:09:49,030 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:09:49,030 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 2016-08-18 10:09:49,034 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032 2016-08-18 10:09:49,038 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:09:49,039 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:09:49,039 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 856 2016-08-18 10:09:49,039 INFO [rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471540188183 on 10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,039 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:09:49,043 DEBUG [rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-18 10:09:49,044 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741889_1065{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 934 2016-08-18 10:09:49,044 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:09:49,045 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-18 10:09:49,045 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,59396,1471539932179' in zk 2016-08-18 10:09:49,046 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,046 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,046 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-18 10:09:49,046 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,046 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be 
received for this timer. 2016-08-18 10:09:49,046 DEBUG [member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 2016-08-18 10:09:49,046 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:09:49,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:09:49,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:09:49,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:49,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:09:49,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:09:49,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:49,049 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,049 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,59396,1471539932179': 2016-08-18 10:09:49,049 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59396,1471539932179' released barrier for procedure 'rolllog', counting down latch. Waiting for 1 more 2016-08-18 10:09:49,185 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741984_1160, blk_1073741985_1161, blk_1073741986_1162, blk_1073741987_1163, blk_1073741988_1164, blk_1073741989_1165, blk_1073741990_1166, blk_1073741991_1167, blk_1073741992_1168, blk_1073741993_1169, blk_1073741994_1170, blk_1073741995_1171, blk_1073741996_1172, blk_1073741997_1173, blk_1073741979_1155, blk_1073741980_1156, blk_1073741981_1157, blk_1073741982_1158, blk_1073741983_1159] 2016-08-18 10:09:49,267 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:49,451 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 with entries=3, filesize=934 B; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032 2016-08-18 10:09:49,453 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:09:49,459 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer 
hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:49,463 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:09:49,464 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:09:49,478 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741890_1066{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 7821 2016-08-18 10:09:49,886 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 with entries=9, filesize=7.64 KB; new WAL /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:49,891 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 1708 2016-08-18 10:09:49,891 INFO [rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471540189032 on 10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,891 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:09:49,893 DEBUG [rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup 2016-08-18 10:09:49,899 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:49,900 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed 2016-08-18 10:09:49,900 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,59399,1471539932874' in zk 2016-08-18 10:09:49,903 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,903 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion 2016-08-18 10:09:49,903 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] 
errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:09:49,903 DEBUG [member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed. 2016-08-18 10:09:49,903 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,904 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,904 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,904 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:09:49,904 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:09:49,904 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:09:49,905 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:49,905 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,905 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:09:49,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:09:49,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:49,906 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,907 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,907 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,59399,1471539932874': 2016-08-18 10:09:49,907 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,59399,1471539932874' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more 2016-08-18 10:09:49,907 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed 2016-08-18 10:09:49,907 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(230): Running finish phase. 
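
The znode dumps above show the barrier layout the procedure framework drives: each region server creates a child under /1/rolllog-proc/acquired/rolllog when it starts the subprocedure and under /1/rolllog-proc/reached/rolllog when its local log roll is done, and the coordinator's latch counts down as reached children appear. A minimal sketch of inspecting that barrier with the stock ZooKeeper client, assuming the quorum and base znode from this log; it is an illustration, not the HBase implementation:

import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: count how many members have crossed the 'rolllog'
// barrier by listing their znodes, the same tree printed by ZKProcedureUtil.
public class BarrierCheck {
  public static void main(String[] args) throws Exception {
    // Quorum and base znode taken from the log (localhost:49480, baseZNode=/1).
    ZooKeeper zk = new ZooKeeper("localhost:49480", 30000, event -> { });
    try {
      List<String> acquired = zk.getChildren("/1/rolllog-proc/acquired/rolllog", false);
      List<String> reached = zk.getChildren("/1/rolllog-proc/reached/rolllog", false);
      // The barrier releases once every acquiring member has also reached.
      System.out.println("reached " + reached.size() + " of " + acquired.size() + " members");
    } catch (KeeperException.NoNodeException e) {
      System.out.println("procedure znodes already cleaned up");
    } finally {
      zk.close();
    }
  }
}
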
2016-08-18 10:09:49,907 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2016-08-18 10:09:49,907 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog 2016-08-18 10:09:49,907 INFO [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort 2016-08-18 10:09:49,908 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,908 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,908 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,908 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,908 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,908 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,909 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:09:49,909 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,909 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,909 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system: 2016-08-18 10:09:49,909 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc 2016-08-18 10:09:49,909 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:09:49,909 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:09:49,909 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,909 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired 2016-08-18 10:09:49,909 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2016-08-18 10:09:49,910 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): 
|----rolllog 2016-08-18 10:09:49,910 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,910 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,910 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,910 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort 2016-08-18 10:09:49,911 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,911 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:49,911 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached 2016-08-18 10:09:49,911 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog 2016-08-18 10:09:49,911 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,912 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:09:49,912 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:09:49,912 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:09:49,912 DEBUG [(10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2016-08-18 10:09:49,912 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog 2016-08-18 10:09:49,913 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful! 
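
With both members reported, the coordinator tears the procedure znodes down and LogRollMasterProcedureManager declares the roll successful. Client-side, this exchange is driven through the public Admin API using the signature/instance pair that appears in this log as 'rolllog-proc : rolllog'. A hedged sketch of that call; the empty property map is an assumption:

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch: ask the master to run the distributed WAL-roll procedure,
// the call whose completion MasterRpcServices polls for above.
public class RollLogClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      Map<String, String> props = new HashMap<>(); // assumed empty for this sketch
      // Returns once every region server finishes its 'rolllog' subprocedure.
      admin.execProcedure("rolllog-proc", "rolllog", props);
    }
  }
}
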
2016-08-18 10:09:49,912 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:09:49,913 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:09:49,913 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:09:49,913 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error) 2016-08-18 10:09:49,913 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2016-08-18 10:09:49,913 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort 2016-08-18 10:09:49,913 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2016-08-18 10:09:49,913 DEBUG [ProcedureExecutor-1] client.HBaseAdmin(2481): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. (max 857 ms per retry) 2016-08-18 10:09:49,913 DEBUG [ProcedureExecutor-1] client.HBaseAdmin(2490): (#1) Sleeping: 100ms while waiting for procedure completion. 2016-08-18 10:09:49,913 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2016-08-18 10:09:49,914 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired 2016-08-18 10:09:49,914 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59396,1471539932179 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,59399,1471539932874 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2016-08-18 10:09:49,914 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2016-08-18 10:09:50,018 DEBUG [ProcedureExecutor-1] client.HBaseAdmin(2496): Getting current status of procedure from master... 2016-08-18 10:09:50,020 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(904): Checking to see if procedure from request:rolllog-proc is done 2016-08-18 10:09:50,020 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup 2016-08-18 10:09:50,024 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(215): In getLogFilesForNewBackup() olderTimestamps: {10.22.9.171:59399=1471539968543, 10.22.9.171:59396=1471539968108} newestTimestamps: {10.22.9.171:59399=1471540017355, 10.22.9.171:59396=1471540016518} 2016-08-18 10:09:50,026 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540188183 2016-08-18 10:09:50,026 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604 2016-08-18 10:09:50,026 WARN [ProcedureExecutor-1] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta; Not a host:port pair: 10.22.9.171,59396,1471539932179.meta 2016-08-18 10:09:50,026 WARN [ProcedureExecutor-1] util.BackupServerUtil(237): Skip log file (can't parse): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta 2016-08-18 10:09:50,027 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032 2016-08-18 10:09:50,027 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:09:50,028 
DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 1471539968961 <= 1471540017355 2016-08-18 10:09:50,028 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540017778 2016-08-18 10:09:50,028 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:50,028 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183 2016-08-18 10:09:50,028 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 1471539936418 <= 1471539968108 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 1471539968108 <= 1471539968108 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(325): newest log hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935 host: 10.22.9.171:59396 newTimestamp: 1471540016518 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 1471539936418 <= 1471539968543 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 1471539968543 <= 1471539968543 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 1471539960721 <= 1471539968543 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 1471539962152 <= 1471539968543 2016-08-18 10:09:50,029 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(500): get WAL files from 
hbase:backup 2016-08-18 10:09:50,033 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471539967737/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418 2016-08-18 10:09:50,033 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108 2016-08-18 10:09:50,033 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974 2016-08-18 10:09:50,033 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471539967737/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418 2016-08-18 10:09:50,033 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:59388/backupUT/backup_1471540016356/hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] backup.BackupInfo(313): setting incr backup file list 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 
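
The selection pass above boils down to a per-server window: a WAL whose timestamp suffix is at or before the host's timestamp from the previous backup is excluded ("excluding old wal ... <="), one inside the window is kept ("not excluding"), and files an earlier backup session already copied are skipped via the hbase:backup bookkeeping. A distilled sketch of that predicate; the method and parameter names are hypothetical, not the IncrementalBackupManager API:

import java.util.Map;
import java.util.Set;

// Hypothetical distillation of the WAL-selection rule in the log: keep a
// WAL only if it is newer than what the previous backup covered for its
// host and has not already been copied by an earlier backup session.
public class WalFilter {
  /** walTs: timestamp suffix parsed from the WAL file name;
   *  host: "ip:port" of the server that wrote it;
   *  olderTs: per-host newest-log timestamps from the previous backup;
   *  done: WAL paths already recorded in hbase:backup. */
  static boolean includeWal(String path, long walTs, String host,
                            Map<String, Long> olderTs, Set<String> done) {
    Long last = olderTs.get(host);
    if (last != null && walTs <= last) {
      return false; // "excluding old wal ... <= olderTimestamp"
    }
    if (done.contains(path)) {
      return false; // "skipping wal": captured by a prior backup image
    }
    return true;
  }
}
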
2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:09:50,034 DEBUG [ProcedureExecutor-1] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:09:50,035 DEBUG [ProcedureExecutor-1] backup.BackupInfo(315): hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:09:50,142 INFO [ProcedureExecutor-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x19936382 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:50,148 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x199363820x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:50,150 DEBUG [ProcedureExecutor-1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b93351a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 10:09:50,153 DEBUG [ProcedureExecutor-1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 10:09:50,153 DEBUG [ProcedureExecutor-1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 10:09:50,153 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x19936382-0x1569e9d55410032 connected 2016-08-18 10:09:50,155 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 10:09:50,155 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60128; # active connections: 12 2016-08-18 10:09:50,156 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 10:09:50,156 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60128 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0 2016-08-18 10:09:50,157 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(175): Attempting to copy table info for:ns1:test-1471539957141 2016-08-18 10:09:50,167 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742007_1183{UCState=COMMITTED, truncateBlock=null, 
primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 294 2016-08-18 10:09:50,273 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:50,576 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540188034/ns1/test-1471539957141/.tabledesc/.tableinfo.0000000001 2016-08-18 10:09:50,577 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(184): Finished copying tableinfo. 2016-08-18 10:09:50,577 INFO [ProcedureExecutor-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x19936382 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:50,582 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x199363820x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:50,584 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(188): Starting to write region info for table ns1:test-1471539957141 2016-08-18 10:09:50,584 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x19936382-0x1569e9d55410033 connected 2016-08-18 10:09:50,591 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742008_1184{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 49 2016-08-18 10:09:50,994 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(197): Finished writing region info for table ns1:test-1471539957141 2016-08-18 10:09:50,997 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(175): Attempting to copy table info for:ns3:test-14715399571412 2016-08-18 10:09:51,011 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742009_1185{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 296 2016-08-18 10:09:51,420 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540188034/ns3/test-14715399571412/.tabledesc/.tableinfo.0000000001 2016-08-18 10:09:51,421 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(184): Finished copying tableinfo. 
2016-08-18 10:09:51,421 INFO [ProcedureExecutor-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x19936382 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:51,425 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x199363820x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:51,427 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(188): Starting to write region info for table ns3:test-14715399571412 2016-08-18 10:09:51,427 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x19936382-0x1569e9d55410034 connected 2016-08-18 10:09:51,433 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742010_1186{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 50 2016-08-18 10:09:51,839 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(197): Finished writing region info for table ns3:test-14715399571412 2016-08-18 10:09:51,841 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(175): Attempting to copy table info for:ns2:test-14715399571411 2016-08-18 10:09:51,855 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742011_1187{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 295 2016-08-18 10:09:52,262 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540188034/ns2/test-14715399571411/.tabledesc/.tableinfo.0000000001 2016-08-18 10:09:52,263 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(184): Finished copying tableinfo. 
2016-08-18 10:09:52,263 INFO [ProcedureExecutor-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x19936382 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:52,267 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x199363820x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:52,269 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(188): Starting to write region info for table ns2:test-14715399571411 2016-08-18 10:09:52,269 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x19936382-0x1569e9d55410035 connected 2016-08-18 10:09:52,274 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:52,276 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742012_1188{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0 2016-08-18 10:09:52,276 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(197): Finished writing region info for table ns2:test-14715399571411 2016-08-18 10:09:52,278 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(175): Attempting to copy table info for:ns4:test-14715399571413 2016-08-18 10:09:52,288 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742013_1189{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 296 2016-08-18 10:09:52,330 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0003_000001 (auth:SIMPLE) 2016-08-18 10:09:52,696 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/backupUT/backup_1471540188034/ns4/test-14715399571413/.tabledesc/.tableinfo.0000000001 2016-08-18 10:09:52,696 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(184): Finished copying tableinfo. 
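
Every table in the backup set gets the same two steps above: copy its descriptor to <backupRoot>/<backupId>/<ns>/<table>/.tabledesc/.tableinfo.0000000001, then write its region info. The log shows the real code going through FSTableDescriptors; the following is only a rough approximation using public client APIs, writing one descriptor to the path seen above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Approximation: fetch a table descriptor and store its protobuf form
// in the backup image layout shown in the log.
public class CopyTableInfo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path dest = new Path("hdfs://localhost:59388/backupUT/backup_1471540188034/"
        + "ns1/test-1471539957141/.tabledesc/.tableinfo.0000000001");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      HTableDescriptor htd =
          admin.getTableDescriptor(TableName.valueOf("ns1:test-1471539957141"));
      FileSystem fs = dest.getFileSystem(conf);
      try (FSDataOutputStream out = fs.create(dest, true)) {
        out.write(htd.toByteArray()); // pb-serialized descriptor
      }
    }
  }
}
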
2016-08-18 10:09:52,697 INFO [ProcedureExecutor-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x19936382 connecting to ZooKeeper ensemble=localhost:49480 2016-08-18 10:09:52,701 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x199363820x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 10:09:52,702 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(188): Starting to write region info for table ns4:test-14715399571413 2016-08-18 10:09:52,702 DEBUG [ProcedureExecutor-1-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x19936382-0x1569e9d55410036 connected 2016-08-18 10:09:52,709 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742014_1190{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 50 2016-08-18 10:09:53,112 DEBUG [ProcedureExecutor-1] util.BackupServerUtil(197): Finished writing region info for table ns4:test-14715399571413 2016-08-18 10:09:53,113 INFO [ProcedureExecutor-1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410032 2016-08-18 10:09:53,114 DEBUG [ProcedureExecutor-1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:09:53,114 INFO [ProcedureExecutor-1] master.IncrementalTableBackupProcedure(125): Incremental copy is starting. 2016-08-18 10:09:53,115 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (1501050059) to /10.22.9.171:59396 from tyu: closed 2016-08-18 10:09:53,115 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60128 because read count=-1. 
Number of active connections: 12 2016-08-18 10:09:53,119 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(308): Doing COPY_TYPE_DISTCP 2016-08-18 10:09:53,139 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(318): DistCp options: [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518, hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936, hdfs://localhost:59388/backupUT/backup_1471540188034/WALs] 2016-08-18 10:09:53,313 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742015_1191{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 4383 2016-08-18 10:09:53,738 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742016_1192{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:09:54,165 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742017_1193{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 91 2016-08-18 10:09:54,593 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742018_1194{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 934 2016-08-18 10:09:55,018 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742019_1195{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1615 2016-08-18 10:09:55,442 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742020_1196{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1615 2016-08-18 10:09:56,278 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25 2016-08-18 10:09:56,300 INFO [ProcedureExecutor-1] 
mapreduce.MapReduceBackupCopyService$BackupDistCp(247): Progress: 100.0% subTask: 1.0 mapProgress: 1.0 2016-08-18 10:09:56,300 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471540188034 set status=RUNNING 2016-08-18 10:09:56,303 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:56,304 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(140): Backup progress data "100%" has been updated to hbase:backup for backup_1471540188034 2016-08-18 10:09:56,304 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService$BackupDistCp(256): Backup progress data updated to hbase:backup: "Progress: 100.0% - 8729 bytes copied." 2016-08-18 10:09:56,305 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService$BackupDistCp(271): DistCp job-id: job_local79986053_0006 completed: true true 2016-08-18 10:09:56,309 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService$BackupDistCp(274): Counters: 23 File System Counters FILE: Number of bytes read=168777202 FILE: Number of bytes written=245739176 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=149564449 HDFS: Number of bytes written=77011094 HDFS: Number of read operations=1908 HDFS: Number of large read operations=0 HDFS: Number of write operations=595 Map-Reduce Framework Map input records=6 Map output records=0 Input split bytes=264 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=0 Total committed heap usage (bytes)=1218445312 File Input Format Counters Bytes Read=1969 File Output Format Counters Bytes Written=0 org.apache.hadoop.tools.mapred.CopyMapper$Counter BYTESCOPIED=8729 BYTESEXPECTED=8729 COPY=6 2016-08-18 10:09:56,309 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(326): list of hdfs://localhost:59388/backupUT/backup_1471540188034/WALs for distcp 0 2016-08-18 10:09:56,312 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540194143; access_time=1471540193731; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:56,312 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540194570; access_time=1471540194155; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:56,312 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355; isDirectory=false; length=934; replication=1; blocksize=134217728; modification_time=1471540194999; access_time=1471540194585; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:56,312 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(331): 
LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961; isDirectory=false; length=4383; replication=1; blocksize=134217728; modification_time=1471540193719; access_time=1471540193305; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:56,312 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518; isDirectory=false; length=1615; replication=1; blocksize=134217728; modification_time=1471540195422; access_time=1471540195011; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:56,312 DEBUG [ProcedureExecutor-1] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936; isDirectory=false; length=1615; replication=1; blocksize=134217728; modification_time=1471540195844; access_time=1471540195434; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 10:09:56,315 INFO [ProcedureExecutor-1] master.IncrementalTableBackupProcedure(176): Incremental copy from hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 to hdfs://localhost:59388/backupUT/backup_1471540188034/WALs finished. 
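
The incremental copy itself is a DistCp job (job_local79986053_0006 above): its sources are the six selected WALs, its target the image's WALs directory, and the counters confirm 8729 bytes copied across 6 files. A minimal sketch against the Hadoop 2.7 DistCp Java API, listing a single source path from the log for brevity:

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

// Sketch of the copy phase: hand the selected WALs to DistCp with the
// backup image's WALs directory as the target, then wait on the job.
public class WalCopy {
  public static void main(String[] args) throws Exception {
    List<Path> srcs = Arrays.asList(new Path(
        "hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/"
        + "oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518"));
    Path target = new Path("hdfs://localhost:59388/backupUT/backup_1471540188034/WALs");
    DistCpOptions options = new DistCpOptions(srcs, target); // Hadoop 2.x constructor
    Job job = new DistCp(new Configuration(), options).execute(); // blocks until done
    System.out.println("DistCp successful: " + job.isSuccessful());
  }
}
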
2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1471540188034 hdfs://localhost:59388/backupUT files [hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518,hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936] 2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961 2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518 2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533 2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355 2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518 2016-08-18 10:09:56,316 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(483): add :hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936 2016-08-18 10:09:56,318 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:56,424 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:09:56,427 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns1:test-1471539957141,ns3:test-14715399571412,ns2:test-14715399571411,ns4:test-14715399571413] 2016-08-18 10:09:56,429 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:56,430 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(365): 
read RS log ts from hbase:backup for root=hdfs://localhost:59388/backupUT 2016-08-18 10:09:56,433 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(205): write backup start code to hbase:backup 1471540016518 2016-08-18 10:09:56,434 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 2016-08-18 10:09:56,435 DEBUG [ProcedureExecutor-1] impl.BackupManifest(455): 1 tables exist in table set. 2016-08-18 10:09:56,435 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034 2016-08-18 10:09:56,435 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:09:56,435 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:09:56,438 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency. 2016-08-18 10:09:56,438 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information: 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup. 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540188034 INCREMENTAL 2016-08-18 10:09:56,442 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034 2016-08-18 10:09:56,443 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup 2016-08-18 10:09:56,443 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup 2016-08-18 10:09:56,446 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency. 
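
All of this bookkeeping, the copied-WAL list, the per-server roll timestamps, and the backup start code, lands in small rows of the hbase:backup system table. A hypothetical sketch of that pattern with ordinary Gets and Puts; the row key and column names here are invented, since the real schema is private to BackupSystemTable:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch of "write backup start code to hbase:backup":
// control state kept as one small row, read back the same way.
public class BackupMeta {
  static void writeStartCode(Connection conn, String startCode) throws Exception {
    try (Table table = conn.getTable(TableName.valueOf("hbase:backup"))) {
      Put put = new Put(Bytes.toBytes("startcode")); // invented row key
      put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("code"), Bytes.toBytes(startCode));
      table.put(put);
    }
  }

  static String readStartCode(Connection conn) throws Exception {
    try (Table table = conn.getTable(TableName.valueOf("hbase:backup"))) {
      Result r = table.get(new Get(Bytes.toBytes("startcode")));
      byte[] v = r.getValue(Bytes.toBytes("meta"), Bytes.toBytes("code"));
      return v == null ? null : Bytes.toString(v);
    }
  }
}
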
2016-08-18 10:09:56,446 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:56,449 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:56,449 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:56,449 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:56,449 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:56,449 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:56,449 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:56,455 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742021_1197{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1921
2016-08-18 10:09:56,861 INFO [ProcedureExecutor-1] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540188034/ns1/test-1471539957141/.backup.manifest
2016-08-18 10:09:56,862 DEBUG [ProcedureExecutor-1] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 10:09:56,862 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:56,862 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:56,862 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:56,867 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:56,867 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:56,871 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:56,871 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:56,871 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:56,871 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:56,872 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:56,872 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:56,872 DEBUG [ProcedureExecutor-1] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540188034 INCREMENTAL
2016-08-18 10:09:56,872 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:56,872 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:56,872 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:56,875 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:56,875 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:56,878 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:56,879 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:56,879 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:56,879 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:56,879 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:56,879 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:56,885 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742022_1198{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1924
2016-08-18 10:09:57,289 INFO [ProcedureExecutor-1] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540188034/ns3/test-14715399571412/.backup.manifest
2016-08-18 10:09:57,289 DEBUG [ProcedureExecutor-1] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 10:09:57,289 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:57,289 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:57,289 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:57,294 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:57,294 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540188034 INCREMENTAL
2016-08-18 10:09:57,298 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:57,299 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:57,299 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:57,302 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:57,302 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:57,305 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:57,305 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:57,306 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:57,306 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:57,306 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:57,306 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:57,312 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742023_1199{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1924
2016-08-18 10:09:57,721 INFO [ProcedureExecutor-1] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540188034/ns2/test-14715399571411/.backup.manifest
2016-08-18 10:09:57,722 DEBUG [ProcedureExecutor-1] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 10:09:57,722 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:57,722 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:57,722 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:57,726 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:57,726 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:57,730 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:57,730 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:57,730 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:57,730 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:57,731 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:57,731 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:57,731 DEBUG [ProcedureExecutor-1] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540188034 INCREMENTAL
2016-08-18 10:09:57,731 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:57,731 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:57,731 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:57,734 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:57,734 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:57,737 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:57,737 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:57,737 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:57,737 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:57,737 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:57,738 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
2016-08-18 10:09:57,744 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742024_1200{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1924
2016-08-18 10:09:58,147 INFO [ProcedureExecutor-1] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540188034/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:09:58,147 DEBUG [ProcedureExecutor-1] impl.BackupManifest(455): 4 tables exist in table set.
2016-08-18 10:09:58,147 DEBUG [ProcedureExecutor-1] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471540188034
2016-08-18 10:09:58,147 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 10:09:58,147 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 10:09:58,151 DEBUG [ProcedureExecutor-1] impl.BackupManager(346): Current backup has an incremental backup ancestor, touching its image manifest in hdfs://localhost:59388/backupUT/backup_1471540016356/WALs to construct the dependency.
2016-08-18 10:09:58,151 DEBUG [ProcedureExecutor-1] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManifest(409): load dependency for: backup_1471540016356
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540016356/WALs/.backup.manifest
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManager(353): Last dependent incremental backup image information:
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManager(354): Token: backup_1471540016356
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManager(355): Backup directory: hdfs://localhost:59388/backupUT
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManager(359): Got 2 ancestors for the current backup.
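Each "Loading manifest from" / "Loaded manifest instance" pair above corresponds to reading a small .backup.manifest file back out of HDFS. A sketch of that read path, assuming only the standard FileSystem API; the manifest's on-disk serialization is internal to BackupManifest and not shown in this log:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ManifestReadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:59388"), conf);
        Path manifest = new Path("/backupUT/backup_1471540016356/WALs/.backup.manifest");
        FileStatus st = fs.getFileStatus(manifest);
        byte[] data = new byte[(int) st.getLen()];
        try (FSDataInputStream in = fs.open(manifest)) {
          in.readFully(data); // raw manifest bytes; BackupManifest deserializes these
        }
      }
    }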
2016-08-18 10:09:58,155 DEBUG [ProcedureExecutor-1] impl.BackupManifest(594): hdfs://localhost:59388/backupUT backup_1471540188034 INCREMENTAL
2016-08-18 10:09:58,162 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742025_1201{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1792
2016-08-18 10:09:58,564 INFO [ProcedureExecutor-1] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/.backup.manifest
2016-08-18 10:09:58,564 DEBUG [ProcedureExecutor-1] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira
2016-08-18 10:09:58,564 DEBUG [ProcedureExecutor-1] master.FullTableBackupProcedure(447): Backup backup_1471540188034 finished: type=INCREMENTAL,tablelist=ns1:test-1471539957141;ns3:test-14715399571412;ns2:test-14715399571411;ns4:test-14715399571413,targetRootDir=hdfs://localhost:59388/backupUT,startts=1471540188150,completets=1471540196435,bytescopied=0
2016-08-18 10:09:58,564 DEBUG [ProcedureExecutor-1] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471540188034 set status=COMPLETE
2016-08-18 10:09:58,565 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:09:58,567 INFO [ProcedureExecutor-1] master.FullTableBackupProcedure(462): Backup backup_1471540188034 completed.
2016-08-18 10:09:58,675 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:593960000000003
2016-08-18 10:09:58,676 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 10.5270sec: IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:59388/backupUT; backupId=backup_1471540188034; tables=ns3:test-14715399571412,ns4:test-14715399571413,ns1:test-1471539957141,ns2:test-14715399571411) id=25 state=FINISHED
2016-08-18 10:10:06,282 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=25
2016-08-18 10:10:06,284 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471540188034/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:10:06,287 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471540188034
2016-08-18 10:10:06,288 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471540188034/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:10:06,288 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1c46d6fd connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:10:06,293 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1c46d6fd0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:10:06,294 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c2cb5af, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:10:06,294 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:10:06,294 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:10:06,295 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1c46d6fd-0x1569e9d55410037 connected
2016-08-18 10:10:06,297 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:10:06,297 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60179; # active connections: 12
2016-08-18 10:10:06,298 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:06,298 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60179 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:06,300 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410037
2016-08-18 10:10:06,301 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:10:06,302 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-18 10:10:06,302 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (1382125511) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:10:06,302 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60179 because read count=-1. Number of active connections: 12
2016-08-18 10:10:06,303 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:10:06,305 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471539967737
2016-08-18 10:10:06,306 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471539967737/ns4/test-14715399571413/.backup.manifest
2016-08-18 10:10:06,306 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns4:test-14715399571413' to 'ns4:table4_restore' from full backup image hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413
2016-08-18 10:10:06,311 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/archive/data/ns4/test-14715399571413 does not exist
2016-08-18 10:10:06,311 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns4:test-14715399571413, will only create table
2016-08-18 10:10:06,312 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x71933426 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:10:06,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x719334260x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:10:06,315 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7297aa55, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:10:06,315 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:10:06,315 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:10:06,316 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x71933426-0x1569e9d55410038 connected
2016-08-18 10:10:06,321 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:10:06,321 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60183; # active connections: 12
2016-08-18 10:10:06,322 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:06,322 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60183 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:06,323 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns4:table4_restore', preserving region splits
2016-08-18 10:10:06,324 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:10:06,324 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60184; # active connections: 13
2016-08-18 10:10:06,325 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:06,325 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60184 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:06,326 INFO [main] client.HBaseAdmin$10(780): Started disable of ns4:table4_restore
2016-08-18 10:10:06,326 INFO [B.defaultRpcServer.handler=2,queue=0,port=59396] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns4:table4_restore
2016-08-18 10:10:06,430 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns4:table4_restore) id=26 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store.
2016-08-18 10:10:06,432 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=26
2016-08-18 10:10:06,433 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:table4_restore/write-master:593960000000001
2016-08-18 10:10:06,538 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=26
2016-08-18 10:10:06,644 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540206643,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"}
2016-08-18 10:10:06,645 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:06,646 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to DISABLING in META
2016-08-18 10:10:06,745 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=26
2016-08-18 10:10:06,752 INFO [ProcedureExecutor-0] procedure.DisableTableProcedure(395): Offlining 1 regions.
2016-08-18 10:10:06,753 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. (offlining), current state: {1b9df2550cafc7710dd1c6ec60242385 state=OPEN, ts=1471540043793, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:10:06,754 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {1b9df2550cafc7710dd1c6ec60242385 state=OPEN, ts=1471540043793, server=10.22.9.171,59399,1471539932874} to {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_CLOSE, ts=1471540206754, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:10:06,754 INFO [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. with state=PENDING_CLOSE
2016-08-18 10:10:06,754 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:06,755 INFO [PriorityRpcServer.handler=1,queue=1,port=59399] regionserver.RSRpcServices(1314): Close 1b9df2550cafc7710dd1c6ec60242385, moving to null
2016-08-18 10:10:06,756 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(90): Processing close of ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:06,756 DEBUG [10.22.9.171,59396,1471539932179-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,59399,1471539932874 for region ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:06,756 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.: disabling compactions & flushes
2016-08-18 10:10:06,756 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:06,756 INFO [StoreCloserThread-ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.-1] regionserver.HStore(839): Closed f
2016-08-18 10:10:06,757 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:10:06,763 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-18 10:10:06,764 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:06,764 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=59396] master.AssignmentManager(2884): Got transition CLOSED for {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_CLOSE, ts=1471540206754, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:10:06,765 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(1106): Transition {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_CLOSE, ts=1471540206754, server=10.22.9.171,59399,1471539932874} to {1b9df2550cafc7710dd1c6ec60242385 state=OFFLINE, ts=1471540206765, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:10:06,765 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. with state=OFFLINE
2016-08-18 10:10:06,765 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:06,766 INFO [B.defaultRpcServer.handler=3,queue=0,port=59396] master.RegionStates(590): Offlined 1b9df2550cafc7710dd1c6ec60242385 from 10.22.9.171,59399,1471539932874
2016-08-18 10:10:06,767 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(122): Closed ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:06,914 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540206914,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"}
2016-08-18 10:10:06,915 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:06,917 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to DISABLED in META
2016-08-18 10:10:06,917 INFO [ProcedureExecutor-0] procedure.DisableTableProcedure(424): Disabled table, ns4:table4_restore, is completed.
2016-08-18 10:10:07,052 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=26
2016-08-18 10:10:07,134 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:table4_restore/write-master:593960000000001
2016-08-18 10:10:07,134 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 698msec: DisableTableProcedure (table=ns4:table4_restore) id=26 owner=tyu state=FINISHED
2016-08-18 10:10:07,559 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=26
2016-08-18 10:10:07,559 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns4:table4_restore completed
2016-08-18 10:10:07,561 INFO [main] client.HBaseAdmin$8(615): Started truncating ns4:table4_restore
2016-08-18 10:10:07,561 INFO [B.defaultRpcServer.handler=1,queue=0,port=59396] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns4:table4_restore
2016-08-18 10:10:07,669 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59396] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns4:table4_restore preserveSplits=true) id=27 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store.
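The disable-then-truncate sequence that RestoreServerUtil drives above maps directly onto plain Admin calls. A minimal sketch of the equivalent client-side steps, with the table name and preserveSplits=true taken from the procedure entries in this log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTargetSketch {
      public static void main(String[] args) throws Exception {
        TableName target = TableName.valueOf("ns4:table4_restore");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(target)) {
            admin.disableTable(target);      // DisableTableProcedure, procId=26 above
          }
          admin.truncateTable(target, true); // TruncateTableProcedure, preserveSplits=true, procId=27
        }
      }
    }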
2016-08-18 10:10:07,673 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:table4_restore/write-master:593960000000002
2016-08-18 10:10:07,674 DEBUG [ProcedureExecutor-2] procedure.TruncateTableProcedure(87): waiting for 'ns4:table4_restore' regions in transition
2016-08-18 10:10:07,781 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471540207780,"tag":[],"qualifier":"","vlen":0}]},"row":"ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385."}
2016-08-18 10:10:07,782 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:07,783 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 1b9df2550cafc7710dd1c6ec60242385, NAME => 'ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.', STARTKEY => '', ENDKEY => ''}]
2016-08-18 10:10:07,785 DEBUG [ProcedureExecutor-2] procedure.DeleteTableProcedure(408): Removing 'ns4:table4_restore' from region states.
2016-08-18 10:10:07,786 DEBUG [ProcedureExecutor-2] procedure.DeleteTableProcedure(412): Marking 'ns4:table4_restore' as deleted.
2016-08-18 10:10:07,786 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471540207786,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns4:table4_restore"}
2016-08-18 10:10:07,787 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:07,788 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1726): Deleted table ns4:table4_restore state from META
2016-08-18 10:10:07,897 DEBUG [ProcedureExecutor-2] procedure.DeleteTableProcedure(340): Archiving region ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. from FS
2016-08-18 10:10:07,897 DEBUG [ProcedureExecutor-2] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385
2016-08-18 10:10:07,900 DEBUG [ProcedureExecutor-2] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits]
2016-08-18 10:10:07,907 DEBUG [ProcedureExecutor-2] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits/4.seqid, to hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/archive/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits/4.seqid
2016-08-18 10:10:07,908 INFO [IPC Server handler 8 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741921_1097 127.0.0.1:59389
2016-08-18 10:10:07,909 DEBUG [ProcedureExecutor-2] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385
2016-08-18 10:10:07,909 DEBUG [ProcedureExecutor-2] procedure.DeleteTableProcedure(344): Table 'ns4:table4_restore' archived!
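The HFileArchiver entries show that a dropped region's files are moved under the cluster's archive directory rather than deleted outright. A conceptual sketch of that move using only the plain FileSystem API; the real HFileArchiver additionally handles per-file wrapping, name collisions, and retries:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveMoveSketch {
      // Move a region directory from the active data area to the archive area.
      static void archiveRegion(FileSystem fs, Path rootDir, String relRegionPath) throws Exception {
        Path src = new Path(rootDir, ".tmp/data/" + relRegionPath);
        Path dst = new Path(rootDir, "archive/data/" + relRegionPath);
        fs.mkdirs(dst.getParent());
        if (!fs.rename(src, dst)) {        // archive == rename, not delete
          throw new IllegalStateException("archive failed for " + src);
        }
      }
    }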
2016-08-18 10:10:07,910 INFO [IPC Server handler 0 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741920_1096 127.0.0.1:59389
2016-08-18 10:10:08,029 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742026_1202{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 291
2016-08-18 10:10:08,434 DEBUG [ProcedureExecutor-2] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp/data/ns4/table4_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 10:10:08,435 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(6162): creating HRegion ns4:table4_restore HTD == 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/.tmp Table name == ns4:table4_restore
2016-08-18 10:10:08,444 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742027_1203{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 45
2016-08-18 10:10:08,851 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:08,852 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.: disabling compactions & flushes
2016-08-18 10:10:08,852 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:08,853 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:08,965 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385."}
2016-08-18 10:10:08,967 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:08,968 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 10:10:09,073 INFO [ProcedureExecutor-2] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,074 ERROR [ProcedureExecutor-2] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 10:10:09,075 INFO [ProcedureExecutor-2] master.RegionStates(1106): Transition {1b9df2550cafc7710dd1c6ec60242385 state=OFFLINE, ts=1471540209073, server=null} to {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_OPEN, ts=1471540209075, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:10:09,075 INFO [ProcedureExecutor-2] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. with state=PENDING_OPEN, sn=10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,076 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:09,077 INFO [PriorityRpcServer.handler=3,queue=1,port=59399] regionserver.RSRpcServices(1666): Open ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:09,082 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(6339): Opening region: {ENCODED => 1b9df2550cafc7710dd1c6ec60242385, NAME => 'ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.', STARTKEY => '', ENDKEY => ''}
2016-08-18 10:10:09,082 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table4_restore 1b9df2550cafc7710dd1c6ec60242385
2016-08-18 10:10:09,083 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:09,085 INFO [StoreOpener-1b9df2550cafc7710dd1c6ec60242385-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=8, currentSize=1108056, freeSize=1042854248, maxSize=1043962304, heapSize=1108056, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 10:10:09,086 INFO [StoreOpener-1b9df2550cafc7710dd1c6ec60242385-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 10:10:09,086 DEBUG [StoreOpener-1b9df2550cafc7710dd1c6ec60242385-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/f
2016-08-18 10:10:09,087 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385
2016-08-18 10:10:09,091 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 10:10:09,091 INFO [RS_OPEN_REGION-10.22.9.171:59399-0] regionserver.HRegion(871): Onlined 1b9df2550cafc7710dd1c6ec60242385; next sequenceid=2
2016-08-18 10:10:09,091 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:10:09,092 INFO [PostOpenDeployTasks:1b9df2550cafc7710dd1c6ec60242385] regionserver.HRegionServer(1952): Post open deploy tasks for ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:10:09,092 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.AssignmentManager(2884): Got transition OPENED for {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_OPEN, ts=1471540209075, server=10.22.9.171,59399,1471539932874} from 10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,092 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStates(1106): Transition {1b9df2550cafc7710dd1c6ec60242385 state=PENDING_OPEN, ts=1471540209075, server=10.22.9.171,59399,1471539932874} to {1b9df2550cafc7710dd1c6ec60242385 state=OPEN, ts=1471540209092, server=10.22.9.171,59399,1471539932874}
2016-08-18 10:10:09,092 INFO [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. with state=OPEN, openSeqNum=2, server=10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,093 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:09,093 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396] master.RegionStates(452): Onlined 1b9df2550cafc7710dd1c6ec60242385 on 10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,093 DEBUG [ProcedureExecutor-2] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,093 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471540209093,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"}
2016-08-18 10:10:09,093 ERROR [B.defaultRpcServer.handler=0,queue=0,port=59396] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 10:10:09,094 DEBUG [PostOpenDeployTasks:1b9df2550cafc7710dd1c6ec60242385] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
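The two "Unable to get table ns4:table4_restore state" ERROR traces appear to be a transient race inside TruncateTableProcedure: the table's state row was deleted from META (10:10:07,788 above) and is only rewritten after region assignment, so TableStateManager briefly cannot resolve the table; the procedure still finishes (see "truncate 'ns4:table4_restore' completed" below). A client that wants to wait out such a window can poll through Admin, e.g. with a hypothetical helper like:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class WaitForTableSketch {
      // Poll until the table is visible and enabled again (assumed helper, not an HBase API).
      static void waitUntilEnabled(Admin admin, TableName tn, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
          if (admin.tableExists(tn) && admin.isTableEnabled(tn)) {
            return;
          }
          Thread.sleep(200); // state row may be momentarily absent during truncate
        }
        throw new java.util.concurrent.TimeoutException("table not enabled: " + tn);
      }
    }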
2016-08-18 10:10:09,094 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:10:09,095 DEBUG [RS_OPEN_REGION-10.22.9.171:59399-0] handler.OpenRegionHandler(126): Opened ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385. on 10.22.9.171,59399,1471539932874
2016-08-18 10:10:09,095 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to ENABLED in META
2016-08-18 10:10:09,204 DEBUG [ProcedureExecutor-2] procedure.TruncateTableProcedure(129): truncate 'ns4:table4_restore' completed
2016-08-18 10:10:09,315 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:table4_restore/write-master:593960000000002
2016-08-18 10:10:09,315 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 1.6430sec: TruncateTableProcedure (table=ns4:table4_restore preserveSplits=true) id=27 owner=tyu state=FINISHED
2016-08-18 10:10:09,439 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396] master.MasterRpcServices(974): Checking to see if procedure is done procId=27
2016-08-18 10:10:09,439 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns4:table4_restore completed
2016-08-18 10:10:09,439 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:10:09,440 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410038
2016-08-18 10:10:09,442 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:10:09,447 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns4:test-14715399571413' to 'ns4:table4_restore' from log dirs: hdfs://localhost:59388/backupUT/backup_1471540016356/WALs,hdfs://localhost:59388/backupUT/backup_1471540188034/WALs
2016-08-18 10:10:09,448 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (2083360233) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:10:09,448 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60184 because read count=-1. Number of active connections: 13
2016-08-18 10:10:09,448 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60183 because read count=-1. Number of active connections: 13
2016-08-18 10:10:09,448 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (938896049) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:10:09,448 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5cae64a9 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:10:09,451 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5cae64a90x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:10:09,451 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bebcef8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:10:09,451 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:10:09,452 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:10:09,452 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5cae64a9-0x1569e9d55410039 connected
2016-08-18 10:10:09,454 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:10:09,454 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60189; # active connections: 12
2016-08-18 10:10:09,454 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:09,455 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60189 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:09,455 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:59388/backupUT/backup_1471540016356/WALs,hdfs://localhost:59388/backupUT/backup_1471540188034/WALs from hbase tables ,ns4:test-14715399571413 to tables ,ns4:table4_restore
2016-08-18 10:10:09,456 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns4:test-14715399571413 into ns4:table4_restore
2016-08-18 10:10:09,457 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns4-table4_restore-1471540209456 from hdfs://localhost:59388/backupUT/backup_1471540016356/WALs,hdfs://localhost:59388/backupUT/backup_1471540188034/WALs
2016-08-18 10:10:09,457 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5523a13e connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:10:09,459 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5523a13e0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:10:09,460 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1788b011, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:10:09,460 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 10:10:09,460 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:10:09,461 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5523a13e-0x1569e9d5541003a connected
2016-08-18 10:10:09,462 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:10:09,462 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60191; # active connections: 13
2016-08-18 10:10:09,463 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:09,463 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60191 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:09,464 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled
2016-08-18 10:10:09,465 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns4:test-14715399571413
2016-08-18 10:10:09,467 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:10:09,467 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60192; # active connections: 14
2016-08-18 10:10:09,468 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:09,468 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60192 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:09,471 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count
2016-08-18 10:10:09,471 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_0aa34e92-cc29-4300-a86b-51c1656fc561
2016-08-18 10:10:09,477 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742028_1204{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 153
2016-08-18 10:10:09,882 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
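MapReduceRestoreService drives WALPlayer with a bulk-output directory, so the backed-up WAL edits are rendered into HFiles via HFileOutputFormat2 instead of being replayed as live puts. A sketch of launching the same tool programmatically; the wal.bulk.output key is assumed here to match WALPlayer.BULK_OUTPUT_CONF_KEY of this 2.0.0-SNAPSHOT vintage:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.WALPlayer;
    import org.apache.hadoop.util.ToolRunner;

    public class WalReplaySketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Assumption: bulk-load output key as used by WALPlayer in this era.
        conf.set("wal.bulk.output", "/tmp/bulk_output-ns4-table4_restore");
        int rc = ToolRunner.run(conf, new WALPlayer(), new String[] {
            // comma-separated WAL dirs, as in the MapReduceRestoreService entry above
            "hdfs://localhost:59388/backupUT/backup_1471540016356/WALs,"
                + "hdfs://localhost:59388/backupUT/backup_1471540188034/WALs",
            "ns4:test-14715399571413", // table recorded in the WALs
            "ns4:table4_restore"       // target table mapping
        });
        System.exit(rc);
      }
    }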
2016-08-18 10:10:10,215 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073741920_1096, blk_1073741921_1097]
2016-08-18 10:10:10,758 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-7618186395965368472.jar
2016-08-18 10:10:12,274 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:10:20,462 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-245586698297124451.jar
2016-08-18 10:10:22,625 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-8783836061009600276.jar
2016-08-18 10:10:22,668 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-8237492693169174296.jar
2016-08-18 10:10:29,624 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-2104378693500521204.jar
2016-08-18 10:10:29,625 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-18 10:10:29,625 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-18 10:10:29,625 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-18 10:10:29,626 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 10:10:29,626 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-18 10:10:29,626 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-18 10:10:29,836 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4027503496210383701.jar
2016-08-18 10:10:29,837 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4027503496210383701.jar
2016-08-18 10:10:31,039 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-787452009531099345.jar
2016-08-18 10:10:31,040 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4027503496210383701.jar
2016-08-18 10:10:31,040 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-4027503496210383701.jar
2016-08-18 10:10:31,041 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/hadoop-787452009531099345.jar
2016-08-18 10:10:31,041 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar
2016-08-18 10:10:31,041 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns4:test-14715399571413 output configured.
2016-08-18 10:10:31,041 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:10:31,041 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541003a
2016-08-18 10:10:31,042 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:10:31,043 DEBUG [main] mapreduce.WALPlayer(324): success configuring load incremental job
2016-08-18 10:10:31,043 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60191 because read count=-1. Number of active connections: 14
2016-08-18 10:10:31,043 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (1828373196) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:10:31,043 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60192 because read count=-1. Number of active connections: 14
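Each "For class X, using jar Y" line above is dependency resolution for the job's classpath: the jar containing each needed class (or, for classes built in the workspace, a jar packaged on the fly under target/test-data) is shipped with the job. A minimal sketch of the call that produces this output, assuming the public TableMapReduceUtil API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class ShipJobDependencies {
  public static Job newJob() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "wal replay"); // illustrative job name
    // Resolves the containing jar for every class the tasks need (HBase,
    // ZooKeeper, netty, protobuf, guava, htrace, metrics, ...) and adds
    // each one to the job's distributed-cache classpath.
    TableMapReduceUtil.addDependencyJars(job);
    return job;
  }
}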
2016-08-18 10:10:31,043 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-872549534) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:10:31,044 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 10:10:31,497 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742029_1205{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1556922
2016-08-18 10:10:31,914 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742030_1206{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:10:31,932 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742031_1207{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 4516740
2016-08-18 10:10:32,288 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache(877): totalSize=1.06 MB, freeSize=994.54 MB, max=995.60 MB, blockCount=8, accesses=8, hits=0, hitRatio=0, cachingAccesses=8, cachingHits=0, cachingHitsRatio=0, evictions=29, evicted=0, evictedPerRun=0.0
2016-08-18 10:10:32,346 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742032_1208{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 112558
2016-08-18 10:10:32,771 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742033_1209{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:10:32,779 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742034_1210{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 662657
2016-08-18 10:10:33,192 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742035_1211{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 38156
2016-08-18 10:10:33,610 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742036_1212{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1475955
2016-08-18 10:10:34,029 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742037_1213{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:10:34,038 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742038_1214{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:10:34,048 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742039_1215{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:10:34,067 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742040_1216{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 4669607
2016-08-18 10:10:34,481 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742041_1217{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 792964
2016-08-18 10:10:34,494 DEBUG [10.22.9.171,59399,1471539932874_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:10:34,899 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742042_1218{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 1795932
2016-08-18 10:10:34,934 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1b90973 connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:10:34,937 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1b909730x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:10:34,937 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d15df82, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:10:34,938 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:10:34,938 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:10:34,938 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup
2016-08-18 10:10:34,938 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1b90973-0x1569e9d5541003b connected
2016-08-18 10:10:34,940 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:10:34,940 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60236; # active connections: 13
2016-08-18 10:10:34,941 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:34,941 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60236 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:34,947 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:10:34,947 DEBUG [RpcServer.listener,port=59399] ipc.RpcServer$Listener(880): RpcServer.listener,port=59399: connection from 10.22.9.171:60237; # active connections: 8
2016-08-18 10:10:34,948 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:10:34,948 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60237 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:10:34,951 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418
2016-08-18 10:10:34,952 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539936418
2016-08-18 10:10:34,952 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:10:34,953 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108
2016-08-18 10:10:34,953 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518
2016-08-18 10:10:34,954 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518
2016-08-18 10:10:34,954 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:10:34,955 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533
2016-08-18 10:10:34,955 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935
2016-08-18 10:10:34,956 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540016935
2016-08-18 10:10:34,956 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:10:34,957 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539936418
2016-08-18 10:10:34,957 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:10:34,958 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543
2016-08-18 10:10:34,958 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355
2016-08-18 10:10:34,959 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355
2016-08-18 10:10:34,959 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721
2016-08-18 10:10:34,960 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721
2016-08-18 10:10:34,960 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:10:34,961 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518
2016-08-18 10:10:34,961 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:10:34,962 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152
2016-08-18 10:10:34,962 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936
2016-08-18 10:10:34,963 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936
2016-08-18 10:10:34,963 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541003b
2016-08-18 10:10:34,964 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:10:34,965 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60236 because read count=-1. Number of active connections: 13
2016-08-18 10:10:34,965 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:60237 because read count=-1. Number of active connections: 8
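The cleaner chore above walks every file in oldWALs and keeps or deletes it on the strength of a single lookup: a WAL may be reclaimed only once hbase:backup records it as backed up. A minimal sketch of that decision, where BackupCatalog is an assumed stand-in for the BackupSystemTable query, not the test's actual code:

import org.apache.hadoop.fs.Path;

public class WalCleanerDecision {
  // Assumed stand-in for the hbase:backup lookup the log calls
  // "Check if WAL file has been already backed up in hbase:backup".
  interface BackupCatalog {
    boolean isBackedUp(Path wal) throws Exception;
  }

  // true  -> "Found log file in hbase:backup, deleting: ..."
  // false -> "Didn't find this log in hbase:backup, keeping: ..."
  static boolean deletable(BackupCatalog catalog, Path oldWal) throws Exception {
    return catalog.isBackedUp(oldWal);
  }
}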
2016-08-18 10:10:34,965 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (1651779532) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:10:34,965 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (520686017) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:10:34,967 DEBUG [10.22.9.171,59396,1471539932179_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:10:35,306 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-18 10:10:35,325 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:59388/backupUT/backup_1471540016356/WALs for WAL files
2016-08-18 10:10:35,328 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/.backup.manifest does not appear to be an WAL file. Skipping...
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471539968108; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024240; access_time=1471540023826; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539937974; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471540022532; access_time=1471540022117; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471539968543; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540024666; access_time=1471540024253; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539940130; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471540022966; access_time=1471540022551; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539960721; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471540025094; access_time=1471540024679; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471539968108; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471540023391; access_time=1471540022979; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,328 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539962152; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471540025521; access_time=1471540025107; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,329 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540016356/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471539968528; isDirectory=false; length=1196; replication=1; blocksize=134217728; modification_time=1471540023814; access_time=1471540023404; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,329 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:59388/backupUT/backup_1471540188034/WALs for WAL files
2016-08-18 10:10:35,331 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/.backup.manifest does not appear to be an WAL file. Skipping...
2016-08-18 10:10:35,331 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540016518; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540194143; access_time=1471540193731; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,331 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471539968533; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471540194570; access_time=1471540194155; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,331 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540017355; isDirectory=false; length=934; replication=1; blocksize=134217728; modification_time=1471540194999; access_time=1471540194585; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,331 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471539968961; isDirectory=false; length=4383; replication=1; blocksize=134217728; modification_time=1471540193719; access_time=1471540193305; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,332 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540016518; isDirectory=false; length=1615; replication=1; blocksize=134217728; modification_time=1471540195422; access_time=1471540195011; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 10:10:35,332 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:59388/backupUT/backup_1471540188034/WALs/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540016936; isDirectory=false; length=1615; replication=1; blocksize=134217728; modification_time=1471540195844; access_time=1471540195434; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
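WALInputFormat builds its splits by listing each backup's WALs directory and skipping anything that is not a WAL; the per-backup .backup.manifest trips the warning above. A minimal sketch of an equivalent scan using only the Hadoop FileSystem API; the leading-dot filter is an assumption that matches the manifest name seen here, not necessarily the exact check WALInputFormat performs:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBackupWals {
  public static List<FileStatus> walFiles(Configuration conf, Path walDir) throws Exception {
    FileSystem fs = walDir.getFileSystem(conf);
    List<FileStatus> found = new ArrayList<>();
    for (FileStatus stat : fs.listStatus(walDir)) {
      // Directories and dot-files such as .backup.manifest are not WALs.
      if (stat.isFile() && !stat.getPath().getName().startsWith(".")) {
        found.add(stat);
      }
    }
    return found;
  }
}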
2016-08-18 10:10:35,339 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742043_1219{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 2877
2016-08-18 10:10:35,750 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742044_1220{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 97
2016-08-18 10:10:36,171 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742045_1221{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 134735
2016-08-18 10:10:36,612 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-18 10:10:36,612 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-18 10:10:37,188 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:37,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because table has an old edit so flush to free WALs after random delay 127269ms
2016-08-18 10:10:38,251 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 167211ms
2016-08-18 10:10:39,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 277736ms
2016-08-18 10:10:39,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 25850ms
2016-08-18 10:10:40,241 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 109096ms
2016-08-18 10:10:40,241 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 276797ms
2016-08-18 10:10:40,292 DEBUG [10.22.9.171,59441,1471539940207_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:10:40,604 DEBUG [region-location-4] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info
2016-08-18 10:10:40,604 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/meta
2016-08-18 10:10:40,604 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info
2016-08-18 10:10:40,605 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/session
2016-08-18 10:10:40,605 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table
2016-08-18 10:10:40,605 DEBUG [10.22.9.171,59437,1471539940144_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 10:10:41,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 33280ms
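The recurring PeriodicMemstoreFlusher lines are a chore that, on each tick, asks every region whether it holds an edit older than the flush interval and, if so, requests a flush after a random delay so regions do not all flush (and roll WALs) at the same moment. A minimal sketch of that jitter, assuming a five-minute delay window consistent with the delays printed in these records; it is not the HRegionServer internals verbatim:

import java.util.concurrent.ThreadLocalRandom;

public class PeriodicFlushJitter {
  static final long RANGE_OF_DELAY_MS = 5 * 60 * 1000; // assumed jitter window

  /** Returns the delay before requesting a flush, or -1 if no edit is old enough. */
  static long flushDelay(long oldestEditAgeMs, long flushIntervalMs) {
    if (oldestEditAgeMs <= flushIntervalMs) {
      return -1; // nothing old enough; leave the WALs alone
    }
    // e.g. "... so flush to free WALs after random delay 127269ms"
    return ThreadLocalRandom.current().nextLong(RANGE_OF_DELAY_MS);
  }
}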
2016-08-18 10:10:41,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 192905ms
2016-08-18 10:10:41,320 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because table has an old edit so flush to free WALs after random delay 37769ms
2016-08-18 10:10:41,429 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:10:42,180 INFO [Socket Reader #1 for port 59477] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:42,318 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 161010ms
2016-08-18 10:10:42,318 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 216339ms
2016-08-18 10:10:42,336 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 11119ms
2016-08-18 10:10:42,336 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 160249ms
2016-08-18 10:10:42,437 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742046_1222{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:10:43,295 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 40580ms
2016-08-18 10:10:43,295 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 141794ms
2016-08-18 10:10:43,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 201675ms
2016-08-18 10:10:43,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 236512ms
2016-08-18 10:10:44,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 198598ms
2016-08-18 10:10:44,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 47879ms
2016-08-18 10:10:44,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 253558ms
2016-08-18 10:10:44,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 268877ms
2016-08-18 10:10:44,415 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:44,415 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:45,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 37277ms
2016-08-18 10:10:45,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 126349ms
2016-08-18 10:10:45,271 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:45,271 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:45,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 211515ms
2016-08-18 10:10:45,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 181569ms
2016-08-18 10:10:46,246 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 224104ms
2016-08-18 10:10:46,246 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 279691ms
2016-08-18 10:10:46,278 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:46,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 232494ms
2016-08-18 10:10:46,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 130762ms
2016-08-18 10:10:47,278 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 142991ms
2016-08-18 10:10:47,278 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 107390ms
2016-08-18 10:10:47,293 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:47,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 17693ms
2016-08-18 10:10:47,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 252425ms
2016-08-18 10:10:48,298 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 120498ms
2016-08-18 10:10:48,299 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 19007ms
2016-08-18 10:10:48,321 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 158246ms
2016-08-18 10:10:48,321 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 28673ms
2016-08-18 10:10:49,124 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:10:49,145 WARN [ContainersLauncher #4] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000003 is : 143
2016-08-18 10:10:49,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 257899ms
because info has an old edit so flush to free WALs after random delay 257899ms 2016-08-18 10:10:49,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 43039ms 2016-08-18 10:10:49,312 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:49,321 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 284925ms 2016-08-18 10:10:49,321 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 251861ms 2016-08-18 10:10:50,320 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 42385ms 2016-08-18 10:10:50,320 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 296341ms 2016-08-18 10:10:50,393 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 24230ms 2016-08-18 10:10:50,393 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. 
because info has an old edit so flush to free WALs after random delay 265261ms 2016-08-18 10:10:50,959 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:50,985 WARN [ContainersLauncher #5] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000005 is : 143 2016-08-18 10:10:51,054 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:51,076 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000002 is : 143 2016-08-18 10:10:51,250 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:51,262 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 198755ms 2016-08-18 10:10:51,262 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 215239ms 2016-08-18 10:10:51,273 WARN [ContainersLauncher #4] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000004 is : 143 2016-08-18 10:10:51,317 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 142000ms 2016-08-18 10:10:51,318 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 240236ms 2016-08-18 10:10:51,326 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:51,327 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:51,327 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:52,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. 
because info has an old edit so flush to free WALs after random delay 143924ms 2016-08-18 10:10:52,236 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 29032ms 2016-08-18 10:10:52,417 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 30220ms 2016-08-18 10:10:52,418 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 263805ms 2016-08-18 10:10:52,712 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:52,734 WARN [ContainersLauncher #5] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000006 is : 143 2016-08-18 10:10:53,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 270835ms 2016-08-18 10:10:53,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 114107ms 2016-08-18 10:10:53,345 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:53,384 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 201545ms 2016-08-18 10:10:53,384 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 104172ms 2016-08-18 10:10:54,197 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:54,220 WARN [ContainersLauncher #6] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000007 is : 143 2016-08-18 10:10:54,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. 
because info has an old edit so flush to free WALs after random delay 42901ms 2016-08-18 10:10:54,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 230869ms 2016-08-18 10:10:54,319 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 35086ms 2016-08-18 10:10:54,320 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 131537ms 2016-08-18 10:10:54,341 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:55,310 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 74337ms 2016-08-18 10:10:55,311 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 176481ms 2016-08-18 10:10:55,319 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 270931ms 2016-08-18 10:10:55,320 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 156807ms 2016-08-18 10:10:56,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. 
because info has an old edit so flush to free WALs after random delay 162712ms 2016-08-18 10:10:56,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 46813ms 2016-08-18 10:10:56,367 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 185500ms 2016-08-18 10:10:56,367 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 194874ms 2016-08-18 10:10:56,517 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:56,540 WARN [ContainersLauncher #4] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000008 is : 143 2016-08-18 10:10:57,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 14176ms 2016-08-18 10:10:57,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 80353ms 2016-08-18 10:10:57,319 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 23090ms 2016-08-18 10:10:57,320 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 27231ms 2016-08-18 10:10:57,356 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:58,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. 
because info has an old edit so flush to free WALs after random delay 186027ms 2016-08-18 10:10:58,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 136672ms 2016-08-18 10:10:58,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 57600ms 2016-08-18 10:10:58,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 128285ms 2016-08-18 10:10:58,504 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:58,528 WARN [ContainersLauncher #4] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000011 is : 143 2016-08-18 10:10:58,665 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:58,692 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000010 is : 143 2016-08-18 10:10:58,762 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:58,786 WARN [ContainersLauncher #5] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000009 is : 143 2016-08-18 10:10:59,290 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 90856ms 2016-08-18 10:10:59,290 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 198630ms 2016-08-18 10:10:59,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 169543ms 2016-08-18 10:10:59,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. 
because info has an old edit so flush to free WALs after random delay 179177ms 2016-08-18 10:10:59,369 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:10:59,370 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:11:00,260 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 2918ms 2016-08-18 10:11:00,261 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 284962ms 2016-08-18 10:11:00,340 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 170558ms 2016-08-18 10:11:00,341 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 135089ms 2016-08-18 10:11:00,837 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE) 2016-08-18 10:11:00,855 WARN [ContainersLauncher #6] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000013 is : 143 2016-08-18 10:11:01,244 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 227840ms 2016-08-18 10:11:01,245 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 180830ms 2016-08-18 10:11:01,333 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 65034ms 2016-08-18 10:11:01,334 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. 
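The PeriodicMemstoreFlusher entries above and below repeat once per chore tick on each region server: any region whose memstore still holds an edit older than the flush threshold is asked to flush, but only after a random delay, so the regions do not all flush at the same instant and their old WAL files can then be archived. The observed delays all stay below roughly five minutes, which is consistent with a fixed jitter bound. A minimal Java sketch of that jittered-flush pattern follows; the class, method names, and the RANGE_OF_DELAY_MS bound are illustrative assumptions, not HBase's actual implementation:

    import java.util.Random;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch of a chore that spreads flushes out with random jitter.
    public class JitteredFlushChore {
        private static final long RANGE_OF_DELAY_MS = 5 * 60 * 1000L; // assumed 5-minute cap
        private final ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        private final Random random = new Random();

        // Called once per chore tick for each region holding an old edit.
        public void requestDelayedFlush(String regionName, Runnable flush) {
            long delayMs = (long) (random.nextDouble() * RANGE_OF_DELAY_MS);
            System.out.println("requesting flush of " + regionName
                + " because info has an old edit so flush to free WALs after random delay "
                + delayMs + "ms");
            pool.schedule(flush, delayMs, TimeUnit.MILLISECONDS);
        }
    }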
2016-08-18 10:11:01,334 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 137582ms
2016-08-18 10:11:02,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 224330ms
2016-08-18 10:11:02,234 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 123721ms
2016-08-18 10:11:02,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 245430ms
2016-08-18 10:11:02,316 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 65628ms
2016-08-18 10:11:02,682 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:11:02,702 WARN [ContainersLauncher #4] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000014 is : 143
2016-08-18 10:11:03,260 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 65175ms
2016-08-18 10:11:03,260 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 341ms
2016-08-18 10:11:03,347 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 113807ms
2016-08-18 10:11:03,348 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 40357ms
2016-08-18 10:11:03,578 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:11:03,591 WARN [ContainersLauncher #4] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000016 is : 143
2016-08-18 10:11:03,617 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:11:03,631 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000015 is : 143
2016-08-18 10:11:04,070 INFO [Socket Reader #1 for port 59485] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:11:04,083 WARN [ContainersLauncher #5] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471539956090_0004_01_000012 is : 143
2016-08-18 10:11:04,112 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742047_1223{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 17532
2016-08-18 10:11:04,120 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742048_1224{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:11:04,142 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742049_1225{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|FINALIZED]]} size 0
2016-08-18 10:11:04,159 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742050_1226{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:11:04,320 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 67172ms
2016-08-18 10:11:04,320 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 76476ms
2016-08-18 10:11:04,333 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 271295ms
2016-08-18 10:11:04,334 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 246137ms
2016-08-18 10:11:04,804 DEBUG [ProcedureExecutorTimeoutThread] procedure2.ProcedureExecutor$CompletedProcedureCleaner(178): Evict completed procedure 6
2016-08-18 10:11:04,913 DEBUG [ProcedureExecutorTimeoutThread] procedure2.ProcedureExecutor$CompletedProcedureCleaner(178): Evict completed procedure 5
2016-08-18 10:11:05,020 DEBUG [ProcedureExecutorTimeoutThread] procedure2.ProcedureExecutor$CompletedProcedureCleaner(178): Evict completed procedure 7
2016-08-18 10:11:05,122 DEBUG [ProcedureExecutorTimeoutThread] procedure2.ProcedureExecutor$CompletedProcedureCleaner(178): Evict completed procedure 8
2016-08-18 10:11:05,180 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742043_1219 127.0.0.1:59389
2016-08-18 10:11:05,180 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742044_1220 127.0.0.1:59389
2016-08-18 10:11:05,180 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742045_1221 127.0.0.1:59389
2016-08-18 10:11:05,180 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742047_1223 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742046_1222 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742042_1218 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742033_1209 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742040_1216 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742038_1214 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742034_1210 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742031_1207 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742035_1211 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742039_1215 127.0.0.1:59389
2016-08-18 10:11:05,181 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742029_1205 127.0.0.1:59389
2016-08-18 10:11:05,182 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742036_1212 127.0.0.1:59389
2016-08-18 10:11:05,182 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742032_1208 127.0.0.1:59389
2016-08-18 10:11:05,182 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742037_1213 127.0.0.1:59389
2016-08-18 10:11:05,182 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742030_1206 127.0.0.1:59389
2016-08-18 10:11:05,182 INFO [IPC Server handler 1 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073742041_1217 127.0.0.1:59389
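The addToInvalidates burst above is the NameNode side of a file deletion: when a client removes a file, every block of that file is queued for invalidation, and the DataNodes holding replicas (here the single DataNode at 127.0.0.1:59389) delete them asynchronously on a later heartbeat. A hedged sketch of the kind of client action that produces such a burst; the path below is illustrative, not the directory deleted in this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DeleteTempDir {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Recursive delete: the NameNode then logs one
            // "BLOCK* addToInvalidates: blk_... <datanode>" line per block released.
            fs.delete(new Path("/tmp/some-staging-dir"), true);
        }
    }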
2016-08-18 10:11:05,224 DEBUG [ProcedureExecutorTimeoutThread] procedure2.ProcedureExecutor$CompletedProcedureCleaner(178): Evict completed procedure 9
2016-08-18 10:11:05,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 31588ms
2016-08-18 10:11:05,235 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 221309ms
2016-08-18 10:11:05,317 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 145668ms
2016-08-18 10:11:05,317 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59437,1471539940144-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. because info has an old edit so flush to free WALs after random delay 248858ms
2016-08-18 10:11:05,329 DEBUG [ProcedureExecutorTimeoutThread] procedure2.ProcedureExecutor$CompletedProcedureCleaner(178): Evict completed procedure 10
2016-08-18 10:11:05,913 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns4-table4_restore-1471540209456
2016-08-18 10:11:05,914 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7334926e connecting to ZooKeeper ensemble=localhost:49480
2016-08-18 10:11:05,919 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7334926e0x0, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 10:11:05,920 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6dcab7be, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 10:11:05,920 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 10:11:05,920 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 10:11:05,921 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7334926e-0x1569e9d5541003c connected
2016-08-18 10:11:05,922 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 10:11:05,923 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60476; # active connections: 13
2016-08-18 10:11:05,923 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:11:05,924 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60476 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:11:05,929 DEBUG [main] client.ConnectionImplementation(604): Table ns4:table4_restore should be available
2016-08-18 10:11:05,931 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns4-table4_restore-1471540209456/_SUCCESS
2016-08-18 10:11:05,932 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 10:11:05,932 DEBUG [RpcServer.listener,port=59396] ipc.RpcServer$Listener(880): RpcServer.listener,port=59396: connection from 10.22.9.171:60477; # active connections: 14
2016-08-18 10:11:05,933 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 10:11:05,933 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 60477 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 10:05:13 PDT 2016" src_checksum: "6c3997ee928f07492587eabb35ff18d8" version_major: 2 version_minor: 0
2016-08-18 10:11:05,934 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns4-table4_restore-1471540209456. Does it contain files in subdirectories that correspond to column family names?
2016-08-18 10:11:05,935 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:11:05,935 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541003c
2016-08-18 10:11:05,935 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:05,936 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0
2016-08-18 10:11:05,936 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410039
2016-08-18 10:11:05,936 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-1195180955) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:05,936 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (323239883) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:05,936 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60476 because read count=-1. Number of active connections: 14
2016-08-18 10:11:05,936 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60477 because read count=-1. Number of active connections: 14
2016-08-18 10:11:05,937 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:05,938 INFO [main] impl.RestoreClientImpl(292): ns4:test-14715399571413 has been successfully restored to ns4:table4_restore
2016-08-18 10:11:05,938 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 10:11:05,938 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471539967737 hdfs://localhost:59388/backupUT/backup_1471539967737/ns4/test-14715399571413/
2016-08-18 10:11:05,938 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471540016356 hdfs://localhost:59388/backupUT/backup_1471540016356/ns4/test-14715399571413/
2016-08-18 10:11:05,938 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471540188034 hdfs://localhost:59388/backupUT/backup_1471540188034/ns4/test-14715399571413/
2016-08-18 10:11:05,938 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished
2016-08-18 10:11:05,938 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-1947113330) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:05,938 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:60189 because read count=-1. Number of active connections: 12
2016-08-18 10:11:05,938 INFO [main] impl.RestoreClientImpl(108): Restore for [ns4:test-14715399571413] are successful!
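The two LoadIncrementalHFiles warnings above explain themselves once the expected input layout is known: the tool walks the output directory of the restore MapReduce job, treats each subdirectory as a column family and loads the HFiles inside it, so plain files such as the job's _SUCCESS marker are skipped, and a directory with no family subdirectories yields the "did not find any files to load" warning. That warning is non-fatal in this run, since the restore goes on to report success. A minimal sketch of driving the same tool, assuming its Tool-style entry point; the directory and table name are taken from the log, but the invocation itself is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class BulkLoadRestoredHFiles {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Expected layout: <dir>/<columnFamily>/<hfile...>; <dir>/_SUCCESS is skipped.
            int rc = ToolRunner.run(conf, new LoadIncrementalHFiles(conf), new String[] {
                "hdfs://localhost:59388/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns4-table4_restore-1471540209456",
                "ns4:table4_restore" });
            System.exit(rc);
        }
    }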
2016-08-18 10:11:05,942 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:11:05,942 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541000d
2016-08-18 10:11:05,943 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:05,943 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (399357622) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:11:05,943 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59511 because read count=-1. Number of active connections: 11
2016-08-18 10:11:05,943 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (-229586878) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:05,943 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (1498392022) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:05,943 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59510 because read count=-1. Number of active connections: 11
2016-08-18 10:11:05,943 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59580 because read count=-1. Number of active connections: 7
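Everything from here on is the per-test resource audit: the harness records a thread count before the test, records it again afterwards, and lists every thread still alive with its stack so leaks are visible; the jump from 792 to 888 threads below is what triggers the listing. A rough Java sketch of that before/after pattern, assuming nothing about the real hbase.ResourceChecker beyond the output format seen here:

    import java.util.Map;

    public class ThreadLeakReport {
        private int before;

        public void beforeTest() {
            before = Thread.getAllStackTraces().size();
        }

        public void afterTest(String testName) {
            Map<Thread, StackTraceElement[]> live = Thread.getAllStackTraces();
            System.out.println("after: " + testName + " Thread=" + live.size()
                + " (was " + before + ")");
            for (Map.Entry<Thread, StackTraceElement[]> entry : live.entrySet()) {
                // A real checker would filter to threads created during the test.
                System.out.println("Potentially hanging thread: " + entry.getKey().getName());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }
    }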
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ApplicationMasterLauncher #5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataStreamer for file /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032 block BP-1865151160-10.22.9.171-1471539927174:blk_1073742005_1181 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59437-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: LogDeleter #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: rs(10.22.9.171,59396,1471539932179)-backup-pool32-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ApplicationMasterLauncher #6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:59399-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59437-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: LogDeleter #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: rs(10.22.9.171,59399,1471539932874)-backup-pool30-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59437-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataStreamer for file /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457 block BP-1865151160-10.22.9.171-1471539927174:blk_1073742006_1182 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59441-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: rs(10.22.9.171,59399,1471539932874)-backup-pool20-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ApplicationMasterLauncher #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ResponseProcessor for block BP-1865151160-10.22.9.171-1471539927174:blk_1073742006_1182 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: member: '10.22.9.171,59399,1471539932874' subprocedure-pool5-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:59399-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging 
thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59437-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ContainersLauncher #2 java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:272) java.io.BufferedInputStream.read1(BufferedInputStream.java:273) java.io.BufferedInputStream.read(BufferedInputStream.java:334) sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) java.io.InputStreamReader.read(InputStreamReader.java:184) java.io.BufferedReader.fill(BufferedReader.java:154) java.io.BufferedReader.read1(BufferedReader.java:205) java.io.BufferedReader.read(BufferedReader.java:279) org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:786) org.apache.hadoop.util.Shell.runCommand(Shell.java:568) org.apache.hadoop.util.Shell.run(Shell.java:479) org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773) org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212) org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) java.util.concurrent.FutureTask.run(FutureTask.java:262) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: B.defaultRpcServer.handler=0,queue=0,port=59396-SendThread(localhost:49480) sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) Potentially hanging thread: ApplicationMasterLauncher #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ResponseProcessor for block BP-1865151160-10.22.9.171-1471539927174:blk_1073742005_1181 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733) Potentially hanging thread: DeletionService #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) 
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-8
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: region-location-4
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t12
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: MASTER_TABLE_OPERATIONS-10.22.9.171:59396-0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: LogDeleter #2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-6
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: IPC Client (1317276040) connection to /10.22.9.171:60255 from tyu
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:933)
 org.apache.hadoop.ipc.Client$Connection.run(Client.java:978)
Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:59399-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.9.171,59399,1471539932874' subprocedure-pool3-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-5
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: LogDeleter #1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_669446424_1 at /127.0.0.1:60125 [Receiving block BP-1865151160-10.22.9.171-1471539927174:blk_1073742005_1181]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 java.io.DataInputStream.read(DataInputStream.java:149)
 org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396-5
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
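The DataXceiver ... [Receiving block ...] entries are different in kind: that stack is a datanode thread blocked in PacketReceiver.receiveNextPacket(), i.e. waiting for the next packet of a write pipeline that is still open, and it lives exactly as long as the client keeps the block's output stream open. A hedged sketch against the stock Hadoop FileSystem API (the namenode port mirrors this run's mini-DFS and is illustrative; this must run against a live HDFS):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // While 'out' stays open, the receiving datanode keeps a DataXceiver
    // thread parked in PacketReceiver.receiveNextPacket() for this block --
    // the same stack shown above. Closing the stream tears the pipeline down.
    public class OpenPipelineDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:59388"); // illustrative mini-cluster address
            FileSystem fs = FileSystem.get(conf);
            FSDataOutputStream out = fs.create(new Path("/tmp/open-block-demo"));
            out.write(new byte[]{1, 2, 3});
            out.hflush();  // data reaches the pipeline, but the block stays under construction
            // ...as long as 'out' is neither closed nor abandoned, the DataXceiver persists...
            out.close();   // pipeline closed; DataXceiver and PacketResponder exit
            fs.close();
        }
    }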
Potentially hanging thread: DeletionService #1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1843809397_1 at /127.0.0.1:60470 [Waiting for operation #3]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read(BufferedInputStream.java:254)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: LogDeleter #1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=4,queue=0,port=59396-EventThread
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Potentially hanging thread: ContainersLauncher #3
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: group-cache-0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #3
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396-4
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
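Each *-SendThread(localhost:49480) / *-EventThread pair in this dump belongs to one ZooKeeper client session: the SendThread sits on a selector inside ClientCnxnSocketNIO.doTransport, the EventThread in LinkedBlockingQueue.take on its event queue, and both live until the client is closed. A minimal sketch against the plain ZooKeeper client API (the connect string mirrors this run's mini-ZK port and is illustrative):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Creating a ZooKeeper handle spawns exactly the SendThread/EventThread
    // pair seen in the dump; they persist until close() is called.
    public class ZkThreadsDemo {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:49480", 30_000, new Watcher() {
                @Override public void process(WatchedEvent event) {
                    System.out.println("event: " + event);
                }
            });
            Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().contains("SendThread")
                          || t.getName().contains("EventThread"))
                .forEach(t -> System.out.println("zk thread: " + t.getName()));
            zk.close(); // without this, both threads outlive the test
        }
    }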
Potentially hanging thread: B.defaultRpcServer.handler=2,queue=0,port=59396-EventThread
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59441-2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t11
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: MoveIntermediateToDone Thread #0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1865151160-10.22.9.171-1471539927174:blk_1073742003_1179, type=LAST_IN_PIPELINE, downstreams=0:[]
 java.lang.Object.wait(Native Method)
 java.lang.Object.wait(Object.java:503)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=2,queue=0,port=59396-SendThread(localhost:49480)
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
 org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Potentially hanging thread: LogDeleter #0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #4
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.9.171,59396,1471539932179)-backup-pool19-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: (10.22.9.171,59396,1471539932179)-proc-coordinator-pool8-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1865151160-10.22.9.171-1471539927174:blk_1073742006_1182, type=LAST_IN_PIPELINE, downstreams=0:[]
 java.lang.Object.wait(Native Method)
 java.lang.Object.wait(Object.java:503)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.9.171,59396,1471539932179' subprocedure-pool2-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=0,queue=0,port=59396-EventThread
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Potentially hanging thread: AsyncRpcChannel-pool2-t15
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1865151160-10.22.9.171-1471539927174:blk_1073742004_1180, type=LAST_IN_PIPELINE, downstreams=0:[]
 java.lang.Object.wait(Native Method)
 java.lang.Object.wait(Object.java:503)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #4
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #4
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
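The AsyncRpcChannel-pool2-tNN threads are Netty event-loop workers blocked in NioEventLoop.select(); they belong to the async RPC client's event-loop group and exit only when the group is shut down gracefully. A sketch against Netty 4's public API (group size and thread-name prefix are illustrative):

    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.util.concurrent.DefaultThreadFactory;

    // Each NioEventLoopGroup worker parks in Selector.select() inside
    // NioEventLoop.run() -- the AsyncRpcChannel-pool2-t* stacks above.
    public class EventLoopDemo {
        public static void main(String[] args) throws Exception {
            NioEventLoopGroup group =
                new NioEventLoopGroup(2, new DefaultThreadFactory("demo-loop"));
            group.next().execute(() -> { });   // force at least one loop thread to start
            Thread.sleep(100);                 // let it reach select()
            Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("demo-loop"))
                .forEach(t -> System.out.println("live: " + t.getName()));
            group.shutdownGracefully().sync(); // the only way these threads exit
        }
    }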
Potentially hanging thread: PacketResponder: BP-1865151160-10.22.9.171-1471539927174:blk_1073742002_1178, type=LAST_IN_PIPELINE, downstreams=0:[]
 java.lang.Object.wait(Native Method)
 java.lang.Object.wait(Object.java:503)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1865151160-10.22.9.171-1471539927174:blk_1073742005_1181, type=LAST_IN_PIPELINE, downstreams=0:[]
 java.lang.Object.wait(Native Method)
 java.lang.Object.wait(Object.java:503)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t13
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: Thread-5393
 java.io.FileInputStream.readBytes(Native Method)
 java.io.FileInputStream.read(FileInputStream.java:272)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
 sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
 sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
 java.io.InputStreamReader.read(InputStreamReader.java:184)
 java.io.BufferedReader.fill(BufferedReader.java:154)
 java.io.BufferedReader.readLine(BufferedReader.java:317)
 java.io.BufferedReader.readLine(BufferedReader.java:382)
 org.apache.hadoop.util.Shell$1.run(Shell.java:547)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59437-5
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396-2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #5
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183 block BP-1865151160-10.22.9.171-1471539927174:blk_1073742002_1178
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: member: '10.22.9.171,59396,1471539932179' subprocedure-pool4-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_669446424_1 at /127.0.0.1:60126 [Receiving block BP-1865151160-10.22.9.171-1471539927174:blk_1073742006_1182]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 java.io.DataInputStream.read(DataInputStream.java:149)
 org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.9.171,59399,1471539932874)-backup-pool31-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1865151160-10.22.9.171-1471539927174:blk_1073742001_1177, type=LAST_IN_PIPELINE, downstreams=0:[]
 java.lang.Object.wait(Native Method)
 java.lang.Object.wait(Object.java:503)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_669446424_1 at /127.0.0.1:60122 [Receiving block BP-1865151160-10.22.9.171-1471539927174:blk_1073742002_1178]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 java.io.DataInputStream.read(DataInputStream.java:149)
 org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t16
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.9.171,59396,1471539932179)-backup-pool29-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1865151160-10.22.9.171-1471539927174:blk_1073742003_1179
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
 org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
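The DataStreamer for file .../WALs/... entries are the client half of the open-pipeline picture: each open WAL accounts for one DataStreamer (in Object.wait for new packets), a ResponseProcessor per in-flight block, and a datanode-side PacketResponder, and the thread names embed the regiongroup-N WAL path, so the trio can be matched up directly. With this many flagged threads, tallying the dump by name prefix is the quickest triage; a rough sketch (the input file name is illustrative, and the prefix-stripping regex is deliberately approximate):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Collapses a ResourceChecker dump like the one above into counts per
    // thread-name prefix, so hundreds of entries reduce to a few suspect pools.
    public class DumpTally {
        private static final Pattern ENTRY =
            Pattern.compile("Potentially hanging thread: ([^\\n]+)");

        public static void main(String[] args) throws Exception {
            String log = new String(Files.readAllBytes(Paths.get("resource-checker.log")));
            Map<String, Integer> byPrefix = new TreeMap<>();
            Matcher m = ENTRY.matcher(log);
            while (m.find()) {
                // strip trailing ids/digits so members of one pool group together
                String prefix = m.group(1).replaceAll("[-# ]?\\d+.*$", "");
                byPrefix.merge(prefix, 1, Integer::sum);
            }
            byPrefix.forEach((name, n) -> System.out.printf("%5d  %s%n", n, name));
        }
    }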
Potentially hanging thread: AsyncRpcChannel-pool2-t10
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_669446424_1 at /127.0.0.1:60124 [Receiving block BP-1865151160-10.22.9.171-1471539927174:blk_1073742004_1180]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 java.io.DataInputStream.read(DataInputStream.java:149)
 org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: (10.22.9.171,59396,1471539932179)-proc-coordinator-pool1-thread-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-298210346_1 at /127.0.0.1:60123 [Receiving block BP-1865151160-10.22.9.171-1471539927174:blk_1073742003_1179]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 java.io.DataInputStream.read(DataInputStream.java:149)
 org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-3
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59441-0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=4,queue=0,port=59396-SendThread(localhost:49480)
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
 org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Potentially hanging thread: ContainersLauncher #6
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1865151160-10.22.9.171-1471539927174:blk_1073742001_1177
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
 org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-9
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
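The DeletionService, LogDeleter, region-location-* and group-cache-0 entries differ from the plain pool workers only in the queue: they park inside ScheduledThreadPoolExecutor$DelayedWorkQueue.take, i.e. a scheduled executor idling between timer ticks. A sketch of the same stack (thread name mimics the dump for readability only):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // The DelayedWorkQueue.take() variant of the idle-worker stack:
    // a periodic task's worker parked until the next scheduled run.
    public class ScheduledIdleDemo {
        public static void main(String[] args) throws Exception {
            ScheduledExecutorService ses = Executors.newScheduledThreadPool(1,
                r -> new Thread(r, "LogDeleter-demo"));
            ses.scheduleAtFixedRate(() -> System.out.println("tick"),
                0, 10, TimeUnit.SECONDS);
            Thread.sleep(200); // between ticks the worker parks in DelayedWorkQueue.take()
            ses.shutdownNow(); // cancels the periodic task so the worker can exit
        }
    }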
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605 block BP-1865151160-10.22.9.171-1471539927174:blk_1073742004_1180
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: region-location-0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59396-0
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:59399-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: region-location-1
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1865151160-10.22.9.171-1471539927174:blk_1073742002_1178
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
 org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: ResponseProcessor for block BP-1865151160-10.22.9.171-1471539927174:blk_1073742004_1180
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 java.io.FilterInputStream.read(FilterInputStream.java:83)
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
 org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: ContainersLauncher #5
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540188183 block BP-1865151160-10.22.9.171-1471539927174:blk_1073742001_1177
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-298210346_1 at /127.0.0.1:60121 [Receiving block BP-1865151160-10.22.9.171-1471539927174:blk_1073742001_1177]
 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
 sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
 sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
 java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
 java.io.BufferedInputStream.read(BufferedInputStream.java:334)
 java.io.DataInputStream.read(DataInputStream.java:149)
 org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
 java.lang.Thread.run(Thread.java:745)
- Thread LEAK? -, OpenFileDescriptor=1169 (was 1032) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=10240 (was 10240), SystemLoadAverage=636 (was 223) - SystemLoadAverage LEAK? -, ProcessCount=280 (was 273) - ProcessCount LEAK? -, AvailableMemoryMB=1152 (was 1310)
-, AvailableMemoryMB=1152 (was 1310) 2016-08-18 10:11:05,993 WARN [main] hbase.ResourceChecker(135): Thread=888 is superior to 500 2016-08-18 10:11:05,993 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=1169 is superior to 1024 2016-08-18 10:11:06,046 INFO [IPC Server handler 9 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741914_1090 127.0.0.1:59389 2016-08-18 10:11:06,047 INFO [IPC Server handler 9 on 59388] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741917_1093 127.0.0.1:59389 2016-08-18 10:11:06,047 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster 2016-08-18 10:11:06,047 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d5541000b 2016-08-18 10:11:06,048 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 10:11:06,048 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (1937835936) to /10.22.9.171:59437 from tyu: closed 2016-08-18 10:11:06,048 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster 2016-08-18 10:11:06,048 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59437] ipc.RpcServer$Listener(912): RpcServer.listener,port=59437: DISCONNECTING client 10.22.9.171:59458 because read count=-1. Number of active connections: 2 2016-08-18 10:11:06,049 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController 2016-08-18 10:11:06,049 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested 2016-08-18 10:11:06,049 INFO [M:0;10.22.9.171:59437] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-18 10:11:06,049 INFO [SplitLogWorker-10.22.9.171:59437] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-18 10:11:06,050 INFO [SplitLogWorker-10.22.9.171:59437] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,59437,1471539940144 exiting 2016-08-18 10:11:06,050 INFO [M:0;10.22.9.171:59437] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 2016-08-18 10:11:06,053 INFO [M:0;10.22.9.171:59437] procedure2.ProcedureExecutor(532): Stopping the procedure executor 2016-08-18 10:11:06,053 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested 2016-08-18 10:11:06,053 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-18 10:11:06,053 INFO [M:0;10.22.9.171:59437] wal.WALProcedureStore(232): Stopping the WAL Procedure Store 2016-08-18 10:11:06,053 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-18 10:11:06,053 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running 2016-08-18 10:11:06,053 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running 2016-08-18 10:11:06,053 INFO [RS:0;10.22.9.171:59441] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-18 10:11:06,054 INFO [RS:0;10.22.9.171:59441] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 
2016-08-18 10:11:06,054 INFO [SplitLogWorker-10.22.9.171:59441] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-18 10:11:06,054 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-18 10:11:06,055 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-18 10:11:06,055 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-18 10:11:06,054 INFO [RS:0;10.22.9.171:59441] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-18 10:11:06,054 INFO [SplitLogWorker-10.22.9.171:59441] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,59441,1471539940207 exiting
2016-08-18 10:11:06,055 INFO [RS:0;10.22.9.171:59441] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-18 10:11:06,055 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-18 10:11:06,055 INFO [RS:0;10.22.9.171:59441] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-18 10:11:06,056 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1063): stopping server 10.22.9.171,59441,1471539940207
2016-08-18 10:11:06,056 DEBUG [RS:0;10.22.9.171:59441] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-18 10:11:06,056 DEBUG [RS_CLOSE_REGION-10.22.9.171:59441-0] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:11:06,056 INFO [RS:0;10.22.9.171:59441] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410009
2016-08-18 10:11:06,056 DEBUG [RS_CLOSE_REGION-10.22.9.171:59441-0] regionserver.HRegion(1419): Closing hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.: disabling compactions & flushes
2016-08-18 10:11:06,056 DEBUG [RS_CLOSE_REGION-10.22.9.171:59441-0] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:11:06,057 INFO [StoreCloserThread-hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.-1] regionserver.HStore(839): Closed meta
2016-08-18 10:11:06,057 INFO [StoreCloserThread-hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.-1] regionserver.HStore(839): Closed session
2016-08-18 10:11:06,057 DEBUG [RS:0;10.22.9.171:59441] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:06,057 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1292): Waiting on 1 regions to close
2016-08-18 10:11:06,057 DEBUG [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1296): {f83c1e5a1081010f5215d68f80335020=hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.}
2016-08-18 10:11:06,057 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-1.1471539944252
2016-08-18 10:11:06,064 DEBUG [RS_CLOSE_REGION-10.22.9.171:59441-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/backup/f83c1e5a1081010f5215d68f80335020/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-18 10:11:06,065 INFO [RS_CLOSE_REGION-10.22.9.171:59441-0] regionserver.HRegion(1552): Closed hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:11:06,065 DEBUG [RS_CLOSE_REGION-10.22.9.171:59441-0] handler.CloseRegionHandler(122): Closed hbase:backup,,1471539943364.f83c1e5a1081010f5215d68f80335020.
2016-08-18 10:11:06,093 INFO [master//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 10:11:06,093 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(146): master//10.22.9.171:0.leaseChecker closing leases
2016-08-18 10:11:06,093 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.9.171:0.leaseChecker closing leases
2016-08-18 10:11:06,093 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.9.171:0.leaseChecker closed leases
2016-08-18 10:11:06,093 INFO [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 10:11:06,093 INFO [RS_OPEN_META-10.22.9.171:59437-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 10:11:06,093 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(149): master//10.22.9.171:0.leaseChecker closed leases
2016-08-18 10:11:06,108 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 465
2016-08-18 10:11:06,110 INFO [M:0;10.22.9.171:59437] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-18 10:11:06,110 INFO [M:0;10.22.9.171:59437] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-18 10:11:06,110 INFO [M:0;10.22.9.171:59437] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-18 10:11:06,110 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1063): stopping server 10.22.9.171,59437,1471539940144
2016-08-18 10:11:06,111 DEBUG [M:0;10.22.9.171:59437] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-18 10:11:06,111 DEBUG [RS_CLOSE_REGION-10.22.9.171:59437-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:11:06,111 INFO [M:0;10.22.9.171:59437] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410008
2016-08-18 10:11:06,111 DEBUG [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HRegion(1419): Closing hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.: disabling compactions & flushes
2016-08-18 10:11:06,111 DEBUG [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:11:06,111 INFO [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=344 B
2016-08-18 10:11:06,112 DEBUG [M:0;10.22.9.171:59437] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:06,112 INFO [M:0;10.22.9.171:59437] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-18 10:11:06,112 INFO [M:0;10.22.9.171:59437] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-18 10:11:06,112 INFO [M:0;10.22.9.171:59437] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-18 10:11:06,112 INFO [M:0;10.22.9.171:59437] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-18 10:11:06,112 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:11:06,112 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-1226668208) to /10.22.9.171:59441 from tyu: closed
2016-08-18 10:11:06,112 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59441] ipc.RpcServer$Listener(912): RpcServer.listener,port=59441: DISCONNECTING client 10.22.9.171:59464 because read count=-1. Number of active connections: 1
2016-08-18 10:11:06,113 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1292): Waiting on 2 regions to close
2016-08-18 10:11:06,113 DEBUG [M:0;10.22.9.171:59437] regionserver.HRegionServer(1296): {1588230740=hbase:meta,,1.1588230740, 880bec924ffe1f47e306a99e52984748=hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.}
2016-08-18 10:11:06,113 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-08-18 10:11:06,114 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-18 10:11:06,114 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-18 10:11:06,114 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=4.02 KB
2016-08-18 10:11:06,114 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:11:06,123 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741839_1015{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 4912
2016-08-18 10:11:06,124 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 6350
2016-08-18 10:11:06,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 228296ms
2016-08-18 10:11:06,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 82437ms
2016-08-18 10:11:06,261 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1091): stopping server 10.22.9.171,59441,1471539940207; all regions closed.
2016-08-18 10:11:06,262 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207
2016-08-18 10:11:06,262 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(1090): closing hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-1.1471539944252
2016-08-18 10:11:06,267 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 669
2016-08-18 10:11:06,274 INFO [10.22.9.171,59437,1471539940144_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2016-08-18 10:11:06,323 INFO [10.22.9.171,59437,1471539940144_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,59437,1471539940144-MemstoreFlusherChore was stopped
2016-08-18 10:11:06,377 INFO [10.22.9.171,59441,1471539940207_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,59441,1471539940207-MemstoreFlusherChore was stopped
2016-08-18 10:11:06,527 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=3.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/.tmp/a8d47bfa737c440a972cbf811373f9f8
2016-08-18 10:11:06,527 INFO [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=6, memsize=344, hasBloomFilter=true, into tmp file hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/.tmp/2c16eaffe4f646eeb7e22590e32bde99
2016-08-18 10:11:06,540 DEBUG [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/.tmp/2c16eaffe4f646eeb7e22590e32bde99 as hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info/2c16eaffe4f646eeb7e22590e32bde99
2016-08-18 10:11:06,546 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741841_1017{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 4846
2016-08-18 10:11:06,546 INFO [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HStore(934): Added hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/info/2c16eaffe4f646eeb7e22590e32bde99, entries=2, sequenceid=6, filesize=4.8 K
2016-08-18 10:11:06,547 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:11:06,547 INFO [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HRegion(2545): Finished memstore flush of ~344 B/344, currentsize=0 B/0 for region hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748. in 436ms, sequenceid=6, compaction requested=false
2016-08-18 10:11:06,548 INFO [StoreCloserThread-hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.-1] regionserver.HStore(839): Closed info
2016-08-18 10:11:06,549 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:11:06,553 DEBUG [RS_CLOSE_REGION-10.22.9.171:59437-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/namespace/880bec924ffe1f47e306a99e52984748/recovered.edits/9.seqid to file, newSeqId=9, maxSeqId=2
2016-08-18 10:11:06,554 INFO [RS_CLOSE_REGION-10.22.9.171:59437-0] regionserver.HRegion(1552): Closed hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:11:06,555 DEBUG [RS_CLOSE_REGION-10.22.9.171:59437-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1471539940601.880bec924ffe1f47e306a99e52984748.
2016-08-18 10:11:06,671 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:11:06,671 INFO [RS:0;10.22.9.171:59441] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59441%2C1471539940207.regiongroup-1:(num 1471539944252)
2016-08-18 10:11:06,671 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207
2016-08-18 10:11:06,671 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(1090): closing hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59441,1471539940207/10.22.9.171%2C59441%2C1471539940207.regiongroup-0.1471539942383
2016-08-18 10:11:06,679 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741835_1011{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 91
2016-08-18 10:11:06,949 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=704, hasBloomFilter=false, into tmp file hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/.tmp/85dabff92548401c828e6e7e0fba8142
2016-08-18 10:11:06,956 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/.tmp/a8d47bfa737c440a972cbf811373f9f8 as hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info/a8d47bfa737c440a972cbf811373f9f8
2016-08-18 10:11:06,962 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HStore(934): Added hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/info/a8d47bfa737c440a972cbf811373f9f8, entries=14, sequenceid=15, filesize=6.2 K
2016-08-18 10:11:06,963 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/.tmp/85dabff92548401c828e6e7e0fba8142 as hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table/85dabff92548401c828e6e7e0fba8142
2016-08-18 10:11:06,969 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HStore(934): Added hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/table/85dabff92548401c828e6e7e0fba8142, entries=4, sequenceid=15, filesize=4.7 K
2016-08-18 10:11:06,970 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:11:06,970 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegion(2545): Finished memstore flush of ~4.02 KB/4112, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 856ms, sequenceid=15, compaction requested=false
2016-08-18 10:11:06,972 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-18 10:11:06,972 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-18 10:11:06,973 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:11:06,977 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/data/hbase/meta/1588230740/recovered.edits/18.seqid to file, newSeqId=18, maxSeqId=3
2016-08-18 10:11:06,978 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-08-18 10:11:06,979 INFO [RS_CLOSE_META-10.22.9.171:59437-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-18 10:11:06,979 DEBUG [RS_CLOSE_META-10.22.9.171:59437-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-08-18 10:11:07,086 DEBUG [RS:0;10.22.9.171:59441] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:11:07,086 INFO [RS:0;10.22.9.171:59441] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59441%2C1471539940207.regiongroup-0:(num 1471539942383)
2016-08-18 10:11:07,086 DEBUG [RS:0;10.22.9.171:59441] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:07,086 INFO [RS:0;10.22.9.171:59441] regionserver.Leases(146): RS:0;10.22.9.171:59441 closing leases
2016-08-18 10:11:07,086 INFO [RS:0;10.22.9.171:59441] regionserver.Leases(149): RS:0;10.22.9.171:59441 closed leases
2016-08-18 10:11:07,086 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59437] ipc.RpcServer$Listener(912): RpcServer.listener,port=59437: DISCONNECTING client 10.22.9.171:59447 because read count=-1. Number of active connections: 1
2016-08-18 10:11:07,086 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (255324460) to /10.22.9.171:59437 from tyu.hfs.1: closed
2016-08-18 10:11:07,086 INFO [RS:0;10.22.9.171:59441] hbase.ChoreService(323): Chore service for: 10.22.9.171,59441,1471539940207 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,59441,1471539940207 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-08-18 10:11:07,086 INFO [RS:0;10.22.9.171:59441] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-18 10:11:07,087 INFO [RS:0;10.22.9.171:59441] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-18 10:11:07,087 INFO [RS:0;10.22.9.171:59441] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-18 10:11:07,087 INFO [RS:0;10.22.9.171:59441] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-18 10:11:07,090 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.9.171,59441,1471539940207
2016-08-18 10:11:07,090 INFO [RS:0;10.22.9.171:59441] ipc.RpcServer(2336): Stopping server on 59441
2016-08-18 10:11:07,090 INFO [RpcServer.listener,port=59441] ipc.RpcServer$Listener(816): RpcServer.listener,port=59441: stopping
2016-08-18 10:11:07,091 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-18 10:11:07,091 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-18 10:11:07,092 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,59441,1471539940207
2016-08-18 10:11:07,092 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,59441,1471539940207
2016-08-18 10:11:07,092 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,59441,1471539940207]
2016-08-18 10:11:07,092 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59441-0x1569e9d55410007, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-18 10:11:07,093 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.9.171,59441,1471539940207 expired; onlineServers=1
2016-08-18 10:11:07,093 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-18 10:11:07,093 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1135): stopping server 10.22.9.171,59441,1471539940207; zookeeper connection closed.
2016-08-18 10:11:07,094 INFO [RS:0;10.22.9.171:59441] regionserver.HRegionServer(1138): RS:0;10.22.9.171:59441 exiting
2016-08-18 10:11:07,094 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@400b2683] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@400b2683
2016-08-18 10:11:07,094 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-08-18 10:11:07,133 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1091): stopping server 10.22.9.171,59437,1471539940144; all regions closed.
2016-08-18 10:11:07,133 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta
2016-08-18 10:11:07,133 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1090): closing hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144.meta/10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0.1471539940372
2016-08-18 10:11:07,137 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 3024
2016-08-18 10:11:07,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 98507ms
2016-08-18 10:11:07,233 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 225780ms
2016-08-18 10:11:07,259 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5def6c5c] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:59389 to delete [blk_1073742029_1205, blk_1073742030_1206, blk_1073742031_1207, blk_1073742032_1208, blk_1073742033_1209, blk_1073742034_1210, blk_1073742035_1211, blk_1073742036_1212, blk_1073742037_1213, blk_1073742038_1214, blk_1073742039_1215, blk_1073742040_1216, blk_1073742041_1217, blk_1073741914_1090, blk_1073742042_1218, blk_1073742043_1219, blk_1073742044_1220, blk_1073741917_1093, blk_1073742045_1221, blk_1073742046_1222, blk_1073742047_1223]
2016-08-18 10:11:07,545 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:11:07,545 INFO [M:0;10.22.9.171:59437] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59437%2C1471539940144.meta.regiongroup-0:(num 1471539940372)
2016-08-18 10:11:07,545 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144
2016-08-18 10:11:07,545 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1090): closing hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-0.1471539941371
2016-08-18 10:11:07,551 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741833_1009{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-564fd608-c77e-48a6-a605-76fa80892254:NORMAL:127.0.0.1:59428|RBW]]} size 91
2016-08-18 10:11:07,962 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:11:07,962 INFO [M:0;10.22.9.171:59437] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59437%2C1471539940144.regiongroup-0:(num 1471539941371)
2016-08-18 10:11:07,962 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144
2016-08-18 10:11:07,962 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1090): closing hdfs://localhost:59425/user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/WALs/10.22.9.171,59437,1471539940144/10.22.9.171%2C59437%2C1471539940144.regiongroup-1.1471539941503
2016-08-18 10:11:07,968 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59428 is added to blk_1073741834_1010{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ba1efc1a-a7d5-4a14-871e-01b29f9ed525:NORMAL:127.0.0.1:59428|RBW]]} size 1383
2016-08-18 10:11:08,256 INFO [10.22.9.171,59399,1471539932874_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59399,1471539932874-MemstoreFlusherChore requesting flush of hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4. because meta has an old edit so flush to free WALs after random delay 8330ms
2016-08-18 10:11:08,275 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. because info has an old edit so flush to free WALs after random delay 285993ms
2016-08-18 10:11:08,275 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher(1617): 10.22.9.171,59396,1471539932179-MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 131566ms
2016-08-18 10:11:08,378 DEBUG [M:0;10.22.9.171:59437] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/c0be87cc-0240-4a70-9e8a-a152837b6437/oldWALs
2016-08-18 10:11:08,378 INFO [M:0;10.22.9.171:59437] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59437%2C1471539940144.regiongroup-1:(num 1471539941503)
2016-08-18 10:11:08,378 DEBUG [M:0;10.22.9.171:59437] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:08,378 INFO [M:0;10.22.9.171:59437] regionserver.Leases(146): M:0;10.22.9.171:59437 closing leases
2016-08-18 10:11:08,378 INFO [M:0;10.22.9.171:59437] regionserver.Leases(149): M:0;10.22.9.171:59437 closed leases
2016-08-18 10:11:08,379 INFO [M:0;10.22.9.171:59437] hbase.ChoreService(323): Chore service for: 10.22.9.171,59437,1471539940144 had [[ScheduledChore: Name: 10.22.9.171,59437,1471539940144-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59437,1471539940144-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59437,1471539940144-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,59437,1471539940144 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59437,1471539940144-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.9.171:59437 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59437,1471539940144-BalancerChore Period: 300000 Unit: MILLISECONDS]] on shutdown
2016-08-18 10:11:08,383 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.9.171,59437,1471539940144
2016-08-18 10:11:08,384 INFO [M:0;10.22.9.171:59437] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2016-08-18 10:11:08,384 INFO [M:0;10.22.9.171:59437] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
2016-08-18 10:11:08,384 DEBUG [M:0;10.22.9.171:59437] master.HMaster(1127): Stopping service threads
2016-08-18 10:11:08,385 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/master
2016-08-18 10:11:08,385 INFO [M:0;10.22.9.171:59437] hbase.ChoreService(323): Chore service for: 10.22.9.171,59437,1471539940144_splitLogManager_ had [] on shutdown
2016-08-18 10:11:08,386 INFO [M:0;10.22.9.171:59437] master.LogRollMasterProcedureManager(55): stop: server shutting down.
2016-08-18 10:11:08,386 INFO [M:0;10.22.9.171:59437] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-08-18 10:11:08,386 INFO [M:0;10.22.9.171:59437] ipc.RpcServer(2336): Stopping server on 59437
2016-08-18 10:11:08,386 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-08-18 10:11:08,386 INFO [RpcServer.listener,port=59437] ipc.RpcServer$Listener(816): RpcServer.listener,port=59437: stopping
2016-08-18 10:11:08,386 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-18 10:11:08,387 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-18 10:11:08,388 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59437-0x1569e9d55410006, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,59437,1471539940144
2016-08-18 10:11:08,388 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,59437,1471539940144]
2016-08-18 10:11:08,389 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1135): stopping server 10.22.9.171,59437,1471539940144; zookeeper connection closed.
2016-08-18 10:11:08,389 INFO [M:0;10.22.9.171:59437] regionserver.HRegionServer(1138): M:0;10.22.9.171:59437 exiting
2016-08-18 10:11:08,389 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-08-18 10:11:08,397 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-18 10:11:08,502 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/dfscluster_8e70a1d2-0197-4e0b-ad8b-c57c3755930d/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/dfscluster_8e70a1d2-0197-4e0b-ad8b-c57c3755930d/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:59425] datanode.BPServiceActor(704): BPOfferService for Block pool BP-666724216-10.22.9.171-1471539939691 (Datanode Uuid 973a9a42-4f2d-41df-b94f-b6002d2955b4) service to localhost/127.0.0.1:59425 interrupted
2016-08-18 10:11:08,502 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/dfscluster_8e70a1d2-0197-4e0b-ad8b-c57c3755930d/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/f4b87605-fcf1-4c38-ae2f-92b28993e346/dfscluster_8e70a1d2-0197-4e0b-ad8b-c57c3755930d/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:59425] datanode.BPServiceActor(835): Ending block pool service for: Block pool BP-666724216-10.22.9.171-1471539939691 (Datanode Uuid 973a9a42-4f2d-41df-b94f-b6002d2955b4) service to localhost/127.0.0.1:59425
2016-08-18 10:11:08,560 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-18 10:11:08,694 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down
2016-08-18 10:11:08,694 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster
2016-08-18 10:11:08,694 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 10:11:08,694 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410005
2016-08-18 10:11:08,695 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:08,695 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster
2016-08-18 10:11:08,695 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-885862923) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:11:08,695 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController
2016-08-18 10:11:08,695 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59489 because read count=-1. Number of active connections: 9
2016-08-18 10:11:08,695 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59579 because read count=-1. Number of active connections: 6
2016-08-18 10:11:08,695 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-1662960578) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:08,696 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested
2016-08-18 10:11:08,696 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59423 because read count=-1. Number of active connections: 9
2016-08-18 10:11:08,696 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-56603400) to /10.22.9.171:59396 from tyu: closed
2016-08-18 10:11:08,697 INFO [M:0;10.22.9.171:59396] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-18 10:11:08,697 INFO [M:0;10.22.9.171:59396] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-18 10:11:08,697 INFO [SplitLogWorker-10.22.9.171:59396] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-18 10:11:08,697 INFO [SplitLogWorker-10.22.9.171:59396] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,59396,1471539932179 exiting
2016-08-18 10:11:08,697 INFO [M:0;10.22.9.171:59396] procedure2.ProcedureExecutor(532): Stopping the procedure executor
2016-08-18 10:11:08,697 INFO [M:0;10.22.9.171:59396] wal.WALProcedureStore(232): Stopping the WAL Procedure Store
2016-08-18 10:11:08,698 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-18 10:11:08,698 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-18 10:11:08,698 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested
2016-08-18 10:11:08,698 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-08-18 10:11:08,698 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-08-18 10:11:08,699 INFO [RS:0;10.22.9.171:59399] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-18 10:11:08,699 INFO [RS:0;10.22.9.171:59399] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-18 10:11:08,700 INFO [RS:0;10.22.9.171:59399] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-18 10:11:08,700 INFO [RS:0;10.22.9.171:59399] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-18 10:11:08,700 INFO [RS:0;10.22.9.171:59399] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-18 10:11:08,700 INFO [SplitLogWorker-10.22.9.171:59399] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-18 10:11:08,700 INFO [SplitLogWorker-10.22.9.171:59399] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,59399,1471539932874 exiting
2016-08-18 10:11:08,700 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1063): stopping server 10.22.9.171,59399,1471539932874
2016-08-18 10:11:08,701 DEBUG [RS:0;10.22.9.171:59399] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-18 10:11:08,701 INFO [RS:0;10.22.9.171:59399] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410003
2016-08-18 10:11:08,701 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-18 10:11:08,701 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-18 10:11:08,701 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-18 10:11:08,701 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-18 10:11:08,701 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(90): Processing close of ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:11:08,701 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] handler.CloseRegionHandler(90): Processing close of ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.
2016-08-18 10:11:08,702 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1419): Closing ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.: disabling compactions & flushes
2016-08-18 10:11:08,703 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1446): Updates disabled for region ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.
2016-08-18 10:11:08,701 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(90): Processing close of ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:11:08,703 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=16.24 KB
2016-08-18 10:11:08,702 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.: disabling compactions & flushes
2016-08-18 10:11:08,701 DEBUG [RS:0;10.22.9.171:59399] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:08,703 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:11:08,703 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1419): Closing ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.: disabling compactions & flushes
2016-08-18 10:11:08,704 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59524 because read count=-1. Number of active connections: 7
2016-08-18 10:11:08,704 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (1218855937) to /10.22.9.171:59396 from tyu.hfs.0: closed
2016-08-18 10:11:08,703 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1292): Waiting on 9 regions to close
2016-08-18 10:11:08,704 DEBUG [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1296): {b3b808604c7a4b394d3cdc0636a4d8d7=ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7., 3c1d62f1b34f7382cb57de1ded772843=ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843., 1b9df2550cafc7710dd1c6ec60242385=ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385., 12e7d6010d0ab46d9061da5bf6f5e4b7=ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7., b61ab1f232defc5aa4ae331a63c6cdd7=ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7., 1147a0b47ba2d478b911f466b29f0fc3=ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3., 36ac3931d4f13816604ff9289aebc876=ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876., 97fff8dc57d09226ac34540d2bf674e4=hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4., ce195e475d29c825c7b292e0d7918bf9=ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.}
2016-08-18 10:11:08,704 INFO [StoreCloserThread-ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:08,704 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183
2016-08-18 10:11:08,704 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1446): Updates disabled for region ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:11:08,705 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:11:08,705 INFO [StoreCloserThread-ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:08,706 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032
2016-08-18 10:11:08,712 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/table4_restore/1b9df2550cafc7710dd1c6ec60242385/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-18 10:11:08,712 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/test-14715399571412/b3b808604c7a4b394d3cdc0636a4d8d7/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2
2016-08-18 10:11:08,713 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:11:08,713 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1552): Closed ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:11:08,713 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(122): Closed ns4:table4_restore,,1471540042499.1b9df2550cafc7710dd1c6ec60242385.
2016-08-18 10:11:08,713 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(122): Closed ns3:test-14715399571412,,1471539963066.b3b808604c7a4b394d3cdc0636a4d8d7.
2016-08-18 10:11:08,713 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(90): Processing close of ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:11:08,714 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
2016-08-18 10:11:08,714 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1419): Closing ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.: disabling compactions & flushes
2016-08-18 10:11:08,714 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.: disabling compactions & flushes
2016-08-18 10:11:08,714 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1446): Updates disabled for region ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:11:08,714 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
2016-08-18 10:11:08,714 INFO [StoreCloserThread-ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:08,715 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:11:08,715 INFO [StoreCloserThread-ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:08,716 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605
2016-08-18 10:11:08,721 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns4/test-14715399571413/12e7d6010d0ab46d9061da5bf6f5e4b7/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2
2016-08-18 10:11:08,721 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/table2_restore/b61ab1f232defc5aa4ae331a63c6cdd7/recovered.edits/8.seqid to file, newSeqId=8, maxSeqId=2
2016-08-18 10:11:08,722 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1552): Closed ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:11:08,722 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(122): Closed ns4:test-14715399571413,,1471539965335.12e7d6010d0ab46d9061da5bf6f5e4b7.
2016-08-18 10:11:08,722 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(90): Processing close of ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:11:08,722 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1419): Closing ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.: disabling compactions & flushes
2016-08-18 10:11:08,722 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1446): Updates disabled for region ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:11:08,722 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=840 B
2016-08-18 10:11:08,723 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
2016-08-18 10:11:08,723 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1471540037482.b61ab1f232defc5aa4ae331a63c6cdd7.
2016-08-18 10:11:08,723 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605
2016-08-18 10:11:08,723 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.
2016-08-18 10:11:08,723 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742051_1227{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 8678
2016-08-18 10:11:08,723 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.: disabling compactions & flushes
2016-08-18 10:11:08,724 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.
2016-08-18 10:11:08,724 INFO [StoreCloserThread-ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:08,724 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032
2016-08-18 10:11:08,727 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 465
2016-08-18 10:11:08,728 INFO [M:0;10.22.9.171:59396] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-18 10:11:08,728 INFO [M:0;10.22.9.171:59396] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-18 10:11:08,728 INFO [M:0;10.22.9.171:59396] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-18 10:11:08,728 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1063): stopping server 10.22.9.171,59396,1471539932179
2016-08-18 10:11:08,728 DEBUG [M:0;10.22.9.171:59396] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-18 10:11:08,728 DEBUG [RS_CLOSE_REGION-10.22.9.171:59396-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:11:08,729 INFO [M:0;10.22.9.171:59396] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569e9d55410002
2016-08-18 10:11:08,729 DEBUG [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HRegion(1419): Closing hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.: disabling compactions & flushes
2016-08-18 10:11:08,729 DEBUG [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:11:08,729 INFO [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=1016 B
2016-08-18 10:11:08,729 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns3/table3_restore/36ac3931d4f13816604ff9289aebc876/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-18 10:11:08,730 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:08,730 INFO [M:0;10.22.9.171:59396] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-18 10:11:08,730 INFO [M:0;10.22.9.171:59396] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-18 10:11:08,730 INFO [M:0;10.22.9.171:59396] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-18 10:11:08,730 INFO [M:0;10.22.9.171:59396] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-18 10:11:08,730 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-918843554) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:11:08,730 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604
2016-08-18 10:11:08,730 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-888545788) to /10.22.9.171:59399 from tyu: closed
2016-08-18 10:11:08,730 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59520 because read count=-1. Number of active connections: 5
2016-08-18 10:11:08,731 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-08-18 10:11:08,730 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1292): Waiting on 2 regions to close
2016-08-18 10:11:08,731 DEBUG [M:0;10.22.9.171:59396] regionserver.HRegionServer(1296): {83a4988679dc2f377c4e4a129e3ecec4=hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4., 1588230740=hbase:meta,,1.1588230740}
2016-08-18 10:11:08,731 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-18 10:11:08,731 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59399] ipc.RpcServer$Listener(912): RpcServer.listener,port=59399: DISCONNECTING client 10.22.9.171:59434 because read count=-1. Number of active connections: 4
2016-08-18 10:11:08,732 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-18 10:11:08,732 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.
2016-08-18 10:11:08,732 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=31.65 KB
2016-08-18 10:11:08,732 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1471540040239.36ac3931d4f13816604ff9289aebc876.
2016-08-18 10:11:08,733 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:11:08,733 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1419): Closing hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.: disabling compactions & flushes
2016-08-18 10:11:08,733 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:11:08,734 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=26.43 KB
2016-08-18 10:11:08,734 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:11:08,734 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742052_1228{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 5011
2016-08-18 10:11:08,734 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:11:08,742 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742053_1229{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:11:08,742 INFO [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=10, memsize=1016, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/.tmp/ebc675cce905492eb3775e4c0967595b
2016-08-18 10:11:08,745 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742054_1230{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 14560
2016-08-18 10:11:08,746 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742055_1231{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:11:08,748 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=87, memsize=26.7 K, hasBloomFilter=false, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/.tmp/3655051ebc804c20b165c3046a95aa42
2016-08-18 10:11:08,750 DEBUG [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/.tmp/ebc675cce905492eb3775e4c0967595b as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/info/ebc675cce905492eb3775e4c0967595b
2016-08-18 10:11:08,753 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3655051ebc804c20b165c3046a95aa42
2016-08-18 10:11:08,756 INFO [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/info/ebc675cce905492eb3775e4c0967595b, entries=6, sequenceid=10, filesize=4.9 K
2016-08-18 10:11:08,757 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604
2016-08-18 10:11:08,757 INFO [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HRegion(2545): Finished memstore flush of ~1016 B/1016, currentsize=0 B/0 for region hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4. in 28ms, sequenceid=10, compaction requested=false
2016-08-18 10:11:08,758 INFO [StoreCloserThread-hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.-1] regionserver.HStore(839): Closed info
2016-08-18 10:11:08,758 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604
2016-08-18 10:11:08,761 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742056_1232{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 0
2016-08-18 10:11:08,761 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=87, memsize=5.0 K, hasBloomFilter=false, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/.tmp/80f7ada083394f0b8063bc9e87ecb9a2
2016-08-18 10:11:08,762 DEBUG [RS_CLOSE_REGION-10.22.9.171:59396-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/namespace/83a4988679dc2f377c4e4a129e3ecec4/recovered.edits/13.seqid to file, newSeqId=13, maxSeqId=2
2016-08-18 10:11:08,763 INFO [RS_CLOSE_REGION-10.22.9.171:59396-0] regionserver.HRegion(1552): Closed hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
2016-08-18 10:11:08,763 DEBUG [RS_CLOSE_REGION-10.22.9.171:59396-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1471539937180.83a4988679dc2f377c4e4a129e3ecec4.
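[Editor's note] The interleaved records above trace the close-time flush path: each memstore is written to a .tmp HFile (DefaultStoreFlusher), committed into the column-family directory (HRegionFileSystem), and the region's final sequence id is persisted under recovered.edits. The same flush can also be requested from client code; a minimal sketch, assuming an HBase 1.x-era client on the classpath and a cluster reachable through the default configuration (the table name is taken from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Requests a memstore flush of every region of the table; the
          // servers then emit the same DefaultStoreFlusher/HStore lines
          // seen above.
          admin.flush(TableName.valueOf("hbase", "namespace"));
        }
      }
    }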
2016-08-18 10:11:08,767 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/.tmp/3655051ebc804c20b165c3046a95aa42 as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/info/3655051ebc804c20b165c3046a95aa42
2016-08-18 10:11:08,772 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3655051ebc804c20b165c3046a95aa42
2016-08-18 10:11:08,773 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/info/3655051ebc804c20b165c3046a95aa42, entries=110, sequenceid=87, filesize=17.6 K
2016-08-18 10:11:08,773 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/.tmp/80f7ada083394f0b8063bc9e87ecb9a2 as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/table/80f7ada083394f0b8063bc9e87ecb9a2
2016-08-18 10:11:08,779 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/table/80f7ada083394f0b8063bc9e87ecb9a2, entries=28, sequenceid=87, filesize=5.9 K
2016-08-18 10:11:08,779 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:11:08,780 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegion(2545): Finished memstore flush of ~31.65 KB/32408, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 48ms, sequenceid=87, compaction requested=false
2016-08-18 10:11:08,781 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-18 10:11:08,781 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-18 10:11:08,782 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:11:08,785 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/meta/1588230740/recovered.edits/90.seqid to file, newSeqId=90, maxSeqId=3
2016-08-18 10:11:08,785 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-08-18 10:11:08,786 INFO [RS_CLOSE_META-10.22.9.171:59396-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-18 10:11:08,786 DEBUG [RS_CLOSE_META-10.22.9.171:59396-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-08-18 10:11:08,843 INFO [10.22.9.171,59396,1471539932179_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2016-08-18 10:11:08,936 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1091): stopping server 10.22.9.171,59396,1471539932179; all regions closed.
2016-08-18 10:11:08,936 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta
2016-08-18 10:11:08,936 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179.meta/10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0.1471539935694
2016-08-18 10:11:08,949 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 18511
2016-08-18 10:11:09,040 INFO [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 10:11:09,098 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(146): master//10.22.9.171:0.leaseChecker closing leases
2016-08-18 10:11:09,098 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.9.171:0.leaseChecker closing leases
2016-08-18 10:11:09,098 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(149): master//10.22.9.171:0.leaseChecker closed leases
2016-08-18 10:11:09,098 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.9.171:0.leaseChecker closed leases
2016-08-18 10:11:09,126 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=205, memsize=16.2 K, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/.tmp/e86567f388624375af5902bb20a30ccb
2016-08-18 10:11:09,133 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/.tmp/e86567f388624375af5902bb20a30ccb as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/e86567f388624375af5902bb20a30ccb
2016-08-18 10:11:09,137 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=111, memsize=840, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/.tmp/e5ad1ff68fbb4d4a83cad0ecdf9ad558
2016-08-18 10:11:09,139 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/f/e86567f388624375af5902bb20a30ccb, entries=99, sequenceid=205, filesize=8.5 K
2016-08-18 10:11:09,140 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183
2016-08-18 10:11:09,140 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(2545): Finished memstore flush of ~16.24 KB/16632, currentsize=0 B/0 for region ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843. in 437ms, sequenceid=205, compaction requested=false
2016-08-18 10:11:09,142 INFO [StoreCloserThread-ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:09,142 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183
2016-08-18 10:11:09,144 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/.tmp/e5ad1ff68fbb4d4a83cad0ecdf9ad558 as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/e5ad1ff68fbb4d4a83cad0ecdf9ad558
2016-08-18 10:11:09,145 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/3c1d62f1b34f7382cb57de1ded772843/recovered.edits/208.seqid to file, newSeqId=208, maxSeqId=2
2016-08-18 10:11:09,146 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1552): Closed ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.
2016-08-18 10:11:09,146 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] handler.CloseRegionHandler(122): Closed ns1:test-1471539957141,,1471539960227.3c1d62f1b34f7382cb57de1ded772843.
2016-08-18 10:11:09,146 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:11:09,146 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.: disabling compactions & flushes
2016-08-18 10:11:09,147 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
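[Editor's note] Each "Committing store file ... as ..." pair above moves a flushed HFile from the region's .tmp directory into its column-family directory. The resulting store files can be inspected afterwards with the plain Hadoop FileSystem API; a sketch, assuming the run-specific paths from this log still exist on the mini-cluster's HDFS:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListStoreFiles {
      public static void main(String[] args) throws Exception {
        // Column-family directory of ns1:test-1471539957141, as committed above.
        Path family = new Path("hdfs://localhost:59388/user/tyu/test-data/"
            + "bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/test-1471539957141/"
            + "3c1d62f1b34f7382cb57de1ded772843/f");
        FileSystem fs = family.getFileSystem(new Configuration());
        for (FileStatus f : fs.listStatus(family)) {
          // One entry per committed HFile, e.g. e86567f388624375af5902bb20a30ccb.
          System.out.println(f.getPath().getName() + "\t" + f.getLen() + " bytes");
        }
      }
    }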
2016-08-18 10:11:09,147 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=29, memsize=20.6 K, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/.tmp/9ced3381cf6f4e0caf6d3906265883e3
2016-08-18 10:11:09,149 INFO [StoreCloserThread-ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:09,150 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183
2016-08-18 10:11:09,152 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/f/e5ad1ff68fbb4d4a83cad0ecdf9ad558, entries=5, sequenceid=111, filesize=4.9 K
2016-08-18 10:11:09,152 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605
2016-08-18 10:11:09,153 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(2545): Finished memstore flush of ~840 B/840, currentsize=0 B/0 for region ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3. in 431ms, sequenceid=111, compaction requested=false
2016-08-18 10:11:09,153 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns1/table1_restore/ce195e475d29c825c7b292e0d7918bf9/recovered.edits/8.seqid to file, newSeqId=8, maxSeqId=2
2016-08-18 10:11:09,154 INFO [StoreCloserThread-ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.-1] regionserver.HStore(839): Closed f
2016-08-18 10:11:09,154 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605
2016-08-18 10:11:09,155 INFO [RS_CLOSE_REGION-10.22.9.171:59399-2] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:11:09,155 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-2] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1471540034697.ce195e475d29c825c7b292e0d7918bf9.
2016-08-18 10:11:09,160 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/ns2/test-14715399571411/1147a0b47ba2d478b911f466b29f0fc3/recovered.edits/114.seqid to file, newSeqId=114, maxSeqId=2
2016-08-18 10:11:09,160 INFO [RS_CLOSE_REGION-10.22.9.171:59399-0] regionserver.HRegion(1552): Closed ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:11:09,161 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-0] handler.CloseRegionHandler(122): Closed ns2:test-14715399571411,,1471539961670.1147a0b47ba2d478b911f466b29f0fc3.
2016-08-18 10:11:09,163 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742057_1233{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 6910
2016-08-18 10:11:09,220 INFO [master//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 10:11:09,242 INFO [10.22.9.171,59396,1471539932179_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,59396,1471539932179-MemstoreFlusherChore was stopped
2016-08-18 10:11:09,252 INFO [10.22.9.171,59399,1471539932874_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,59399,1471539932874-MemstoreFlusherChore was stopped
2016-08-18 10:11:09,326 INFO [RS_OPEN_META-10.22.9.171:59396-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 10:11:09,359 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:09,359 INFO [M:0;10.22.9.171:59396] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59396%2C1471539932179.meta.regiongroup-0:(num 1471539935694)
2016-08-18 10:11:09,360 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179
2016-08-18 10:11:09,360 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-1.1471540188604
2016-08-18 10:11:09,363 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742003_1179{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 893
2016-08-18 10:11:09,568 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=29, memsize=5.8 K, hasBloomFilter=true, into tmp file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/.tmp/b7e7eefa4aee42eab863c0e0b475c0f1
2016-08-18 10:11:09,577 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/.tmp/9ced3381cf6f4e0caf6d3906265883e3 as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/meta/9ced3381cf6f4e0caf6d3906265883e3
2016-08-18 10:11:09,583 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/meta/9ced3381cf6f4e0caf6d3906265883e3, entries=59, sequenceid=29, filesize=14.2 K
2016-08-18 10:11:09,584 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/.tmp/b7e7eefa4aee42eab863c0e0b475c0f1 as hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/session/b7e7eefa4aee42eab863c0e0b475c0f1
2016-08-18 10:11:09,590 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HStore(934): Added hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/session/b7e7eefa4aee42eab863c0e0b475c0f1, entries=3, sequenceid=29, filesize=6.7 K
2016-08-18 10:11:09,591 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:11:09,591 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(2545): Finished memstore flush of ~26.43 KB/27064, currentsize=0 B/0 for region hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4. in 858ms, sequenceid=29, compaction requested=false
2016-08-18 10:11:09,592 INFO [StoreCloserThread-hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.-1] regionserver.HStore(839): Closed meta
2016-08-18 10:11:09,593 INFO [StoreCloserThread-hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.-1] regionserver.HStore(839): Closed session
2016-08-18 10:11:09,594 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:11:09,598 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/data/hbase/backup/97fff8dc57d09226ac34540d2bf674e4/recovered.edits/32.seqid to file, newSeqId=32, maxSeqId=2
2016-08-18 10:11:09,599 INFO [RS_CLOSE_REGION-10.22.9.171:59399-1] regionserver.HRegion(1552): Closed hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:11:09,599 DEBUG [RS_CLOSE_REGION-10.22.9.171:59399-1] handler.CloseRegionHandler(122): Closed hbase:backup,,1471539939627.97fff8dc57d09226ac34540d2bf674e4.
2016-08-18 10:11:09,716 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1091): stopping server 10.22.9.171,59399,1471539932874; all regions closed.
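[Editor's note] hbase:backup, closed above with its meta and session column families, is the system table in which the backup feature keeps its bookkeeping, and it can be read like any other table. A sketch, assuming the backup-enabled build this test exercises and a running cluster:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class ScanBackupTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("hbase", "backup"));
             ResultScanner scanner = table.getScanner(new Scan())) {
          for (Result r : scanner) {
            System.out.println(r); // one row per backup bookkeeping record
          }
        }
      }
    }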
2016-08-18 10:11:09,717 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:09,717 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-0.1471540189032
2016-08-18 10:11:09,722 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742005_1181{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 668
2016-08-18 10:11:09,773 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:09,773 INFO [M:0;10.22.9.171:59396] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59396%2C1471539932179.regiongroup-1:(num 1471540188604)
2016-08-18 10:11:09,773 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179
2016-08-18 10:11:09,773 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59396,1471539932179/10.22.9.171%2C59396%2C1471539932179.regiongroup-0.1471540188183
2016-08-18 10:11:09,778 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742001_1177{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e0680069-b93f-4c56-b218-5416e527e484:NORMAL:127.0.0.1:59389|RBW]]} size 91
2016-08-18 10:11:10,129 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:10,129 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59399%2C1471539932874.regiongroup-0:(num 1471540189032)
2016-08-18 10:11:10,129 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:10,129 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-3.1471540188605
2016-08-18 10:11:10,133 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742004_1180{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1228
2016-08-18 10:11:10,187 DEBUG [M:0;10.22.9.171:59396] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:10,187 INFO [M:0;10.22.9.171:59396] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59396%2C1471539932179.regiongroup-0:(num 1471540188183)
2016-08-18 10:11:10,187 DEBUG [M:0;10.22.9.171:59396] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:10,187 INFO [M:0;10.22.9.171:59396] regionserver.Leases(146): M:0;10.22.9.171:59396 closing leases
2016-08-18 10:11:10,188 INFO [M:0;10.22.9.171:59396] regionserver.Leases(149): M:0;10.22.9.171:59396 closed leases
2016-08-18 10:11:10,188 INFO [M:0;10.22.9.171:59396] hbase.ChoreService(323): Chore service for: 10.22.9.171,59396,1471539932179 had [[ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59396,1471539932179-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59396,1471539932179-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: 10.22.9.171,59396,1471539932179-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59396,1471539932179-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.9.171:59396 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,59396,1471539932179-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,59396,1471539932179 Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-08-18 10:11:10,192 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:11:10,192 INFO [M:0;10.22.9.171:59396] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2016-08-18 10:11:10,193 INFO [M:0;10.22.9.171:59396] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
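[Editor's note] The ChoreService shutdown record above enumerates the periodic tasks (ScheduledChore instances) the master was running: cleaners, the balancer, the catalog janitor, and so on. The pattern itself is small; a sketch of a custom chore, with an illustrative name and period:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreExample {
      static class HeartbeatChore extends ScheduledChore {
        HeartbeatChore(Stoppable stopper) {
          super("HeartbeatChore", stopper, 60000); // name, stopper, period (ms)
        }
        @Override
        protected void chore() {
          System.out.println("periodic work runs here");
        }
      }

      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("example");
        service.scheduleChore(new HeartbeatChore(stopper));
        Thread.sleep(1000);
        // Shutting the service down is what produces the
        // "Chore service for: ... had [...] on shutdown" record above.
        service.shutdown();
      }
    }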
2016-08-18 10:11:10,193 INFO [M:0;10.22.9.171:59396] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.9.171,59396,1471539932179, 10.22.9.171,59399,1471539932874
2016-08-18 10:11:10,352 INFO [Socket Reader #1 for port 59481] ipc.Server$Connection(1316): Auth successful for appattempt_1471539956090_0004_000001 (auth:SIMPLE)
2016-08-18 10:11:10,545 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:10,545 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59399%2C1471539932874.regiongroup-3:(num 1471540188605)
2016-08-18 10:11:10,545 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:10,545 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-2.1471540188183
2016-08-18 10:11:10,550 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742002_1178{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 1225
2016-08-18 10:11:10,963 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:10,963 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59399%2C1471539932874.regiongroup-2:(num 1471540188183)
2016-08-18 10:11:10,963 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:10,963 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1090): closing hdfs://localhost:59388/user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/WALs/10.22.9.171,59399,1471539932874/10.22.9.171%2C59399%2C1471539932874.regiongroup-1.1471540189457
2016-08-18 10:11:10,979 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59389 is added to blk_1073742006_1182{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-735609ca-dd6e-4e3a-a792-cc01b8356b1e:NORMAL:127.0.0.1:59389|RBW]]} size 7637
2016-08-18 10:11:11,223 INFO [M:0;10.22.9.171:59396] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.9.171,59396,1471539932179, 10.22.9.171,59399,1471539932874
2016-08-18 10:11:11,391 DEBUG [RS:0;10.22.9.171:59399] wal.FSHLog(1045): Moved 4 WAL file(s) to /user/tyu/test-data/bcf92cc1-a19f-4281-9c61-e117e3540179/oldWALs
2016-08-18 10:11:11,391 INFO [RS:0;10.22.9.171:59399] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C59399%2C1471539932874.regiongroup-1:(num 1471540189457)
2016-08-18 10:11:11,392 DEBUG [RS:0;10.22.9.171:59399] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] regionserver.Leases(146): RS:0;10.22.9.171:59399 closing leases
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] regionserver.Leases(149): RS:0;10.22.9.171:59399 closed leases
2016-08-18 10:11:11,392 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-364276393) to /10.22.9.171:59396 from tyu.hfs.0: closed
2016-08-18 10:11:11,392 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=59396] ipc.RpcServer$Listener(912): RpcServer.listener,port=59396: DISCONNECTING client 10.22.9.171:59412 because read count=-1. Number of active connections: 6
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] hbase.ChoreService(323): Chore service for: 10.22.9.171,59399,1471539932874 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,59399,1471539932874 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-18 10:11:11,392 INFO [RS:0;10.22.9.171:59399] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-18 10:11:11,396 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:11,396 INFO [RS:0;10.22.9.171:59399] ipc.RpcServer(2336): Stopping server on 59399
2016-08-18 10:11:11,396 INFO [RpcServer.listener,port=59399] ipc.RpcServer$Listener(816): RpcServer.listener,port=59399: stopping
2016-08-18 10:11:11,397 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-18 10:11:11,397 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-18 10:11:11,397 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:11,397 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,59399,1471539932874
2016-08-18 10:11:11,398 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:59399-0x1569e9d55410001, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-18 10:11:11,398 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,59399,1471539932874]
2016-08-18 10:11:11,399 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.9.171,59399,1471539932874 expired; onlineServers=1
2016-08-18 10:11:11,399 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-18 10:11:11,399 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1135): stopping server 10.22.9.171,59399,1471539932874; zookeeper connection closed.
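[Editor's note] The NodeDeleted / "ephemeral node deleted, processing expiration" records above are ZooKeeper-based liveness at work: each region server holds an ephemeral znode under the cluster's /rs path, and when its session ends ZooKeeper deletes the node, which the master's RegionServerTracker treats as server expiration. A minimal sketch of the underlying pattern with the plain ZooKeeper client (paths and ports illustrative, not HBase's actual tracker code):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class EphemeralLiveness {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event ->
            System.out.println("event: " + event.getType() + " " + event.getPath()));
        if (zk.exists("/demo-rs", false) == null) {
          zk.create("/demo-rs", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
              CreateMode.PERSISTENT);
        }
        // Ephemeral node: lives exactly as long as this session does.
        zk.create("/demo-rs/host,16020,0001", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // Closing the session makes ZooKeeper delete the node, firing the
        // NodeDeleted/NodeChildrenChanged events a watcher like the master's sees.
        zk.close();
      }
    }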
2016-08-18 10:11:11,399 INFO [RS:0;10.22.9.171:59399] regionserver.HRegionServer(1138): RS:0;10.22.9.171:59399 exiting
2016-08-18 10:11:11,399 INFO [M:0;10.22.9.171:59396] master.ServerManager(562): ZK shows there is only the master self online, exiting now
2016-08-18 10:11:11,399 DEBUG [M:0;10.22.9.171:59396] master.HMaster(1127): Stopping service threads
2016-08-18 10:11:11,399 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56533706] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56533706
2016-08-18 10:11:11,399 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-08-18 10:11:11,400 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master
2016-08-18 10:11:11,400 INFO [M:0;10.22.9.171:59396] hbase.ChoreService(323): Chore service for: 10.22.9.171,59396,1471539932179_splitLogManager_ had [] on shutdown
2016-08-18 10:11:11,400 INFO [M:0;10.22.9.171:59396] master.LogRollMasterProcedureManager(55): stop: server shutting down.
2016-08-18 10:11:11,400 INFO [M:0;10.22.9.171:59396] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-08-18 10:11:11,400 INFO [M:0;10.22.9.171:59396] ipc.RpcServer(2336): Stopping server on 59396
2016-08-18 10:11:11,400 INFO [RpcServer.listener,port=59396] ipc.RpcServer$Listener(816): RpcServer.listener,port=59396: stopping
2016-08-18 10:11:11,400 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-08-18 10:11:11,401 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-18 10:11:11,401 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-18 10:11:11,402 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:59396-0x1569e9d55410000, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,59396,1471539932179
2016-08-18 10:11:11,402 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,59396,1471539932179]
2016-08-18 10:11:11,402 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1135): stopping server 10.22.9.171,59396,1471539932179; zookeeper connection closed.
2016-08-18 10:11:11,402 INFO [M:0;10.22.9.171:59396] regionserver.HRegionServer(1138): M:0;10.22.9.171:59396 exiting
2016-08-18 10:11:11,426 INFO [main] zookeeper.MiniZooKeeperCluster(319): Shutdown MiniZK cluster with all ZK servers
2016-08-18 10:11:11,426 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-08-18 10:11:11,433 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-18 10:11:11,506 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3533aa4b-0x1569e9d55410031, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 10:11:11,506 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x3533aa4b-0x1569e9d55410031, quorum=localhost:49480, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 10:11:11,506 DEBUG [10.22.9.171:59437.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x1569e9d5541000a, quorum=localhost:49480, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 10:11:11,506 DEBUG [10.22.9.171:59437.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x1569e9d5541000a, quorum=localhost:49480, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-08-18 10:11:11,506 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4da8940a-0x1569e9d55410010, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 10:11:11,507 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x4da8940a-0x1569e9d55410010, quorum=localhost:49480, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 10:11:11,506 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4a34f6c1-0x1569e9d5541000e, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 10:11:11,507 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=59396-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x4a34f6c1-0x1569e9d5541000e, quorum=localhost:49480, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 10:11:11,506 DEBUG [10.22.9.171:59396.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x1569e9d55410004, quorum=localhost:49480, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 10:11:11,507 DEBUG [10.22.9.171:59396.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x1569e9d55410004, quorum=localhost:49480, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 10:11:11,542 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:59388] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1865151160-10.22.9.171-1471539927174 (Datanode Uuid f1a1ce0d-aa7a-4774-bdcb-e77714320637) service to localhost/127.0.0.1:59388 interrupted
2016-08-18 10:11:11,542 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/d4073ec2-2aa0-40b5-99b2-612bea0c59af/dfscluster_2d76f3f4-9dc4-4950-aa90-aebb405cacf6/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:59388] datanode.BPServiceActor(835): Ending block pool service for: Block pool BP-1865151160-10.22.9.171-1471539927174 (Datanode Uuid f1a1ce0d-aa7a-4774-bdcb-e77714320637) service to localhost/127.0.0.1:59388
2016-08-18 10:11:11,610 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-18 10:11:11,757 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down
2016-08-18 10:11:11,757 INFO [main] hbase.HBaseTestingUtility(2498): Stopping mini mapreduce cluster...
2016-08-18 10:11:11,760 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 10:11:11,899 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 10:11:25,896 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 10:11:39,916 ERROR [Thread[Thread-636,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-18 10:11:39,917 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 10:11:40,022 WARN [ApplicationMaster Launcher] amlauncher.ApplicationMasterLauncher$LauncherThread(122): org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2016-08-18 10:11:40,026 ERROR [ResourceManager Event Processor] resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor(672): Returning, interrupted : java.lang.InterruptedException
2016-08-18 10:11:40,026 ERROR [Thread[Thread-467,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-18 10:11:40,030 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 10:11:40,135 ERROR [Thread[Thread-447,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-18 10:11:40,135 INFO [main] hbase.HBaseTestingUtility(2501): Mini mapreduce cluster stopped
2016-08-18 10:11:40,143 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@451e7804
2016-08-18 10:11:40,143 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-18 10:11:40,143 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@451e7804
2016-08-18 10:11:40,143 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-18 10:11:40,143 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@451e7804
2016-08-18 10:11:40,144 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-18 10:11:40,144 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@451e7804
2016-08-18 10:11:40,144 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(120): Starting fs shutdown hook thread.
2016-08-18 10:11:40,460 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
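[Editor's note] End of run. For reference, the whole shutdown sequence above is what a single pair of teardown calls produces in an HBaseTestingUtility-based test. A sketch of the typical skeleton, with illustrative class and field names; the call order mirrors the log (mini HBase cluster first, then the mini MapReduce cluster):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class BackupRestoreTestSkeleton {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpBeforeClass() throws Exception {
        TEST_UTIL.startMiniCluster(1);         // 1 master, 1 regionserver, 1 datanode
        TEST_UTIL.startMiniMapReduceCluster(); // backup/restore runs MR jobs
      }

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        TEST_UTIL.shutdownMiniCluster();          // "Minicluster is down"
        TEST_UTIL.shutdownMiniMapReduceCluster(); // "Mini mapreduce cluster stopped"
      }
    }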