2018-07-02 07:34:05,562 DEBUG [main] hbase.HBaseTestingUtility(343): Setting hbase.rootdir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e
2018-07-02 07:34:05,581 INFO [main] hbase.HBaseTestingUtility(455): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/hadoop-log-dir so I do NOT create it in target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1
2018-07-02 07:34:05,581 WARN [main] hbase.HBaseTestingUtility(459): hadoop.log.dir property value differs in configuration and system: Configuration=/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/../logs while System=/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/hadoop-log-dir Erasing configuration value by system value.
2018-07-02 07:34:05,581 INFO [main] hbase.HBaseTestingUtility(455): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/hadoop-tmp-dir so I do NOT create it in target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1
2018-07-02 07:34:05,582 WARN [main] hbase.HBaseTestingUtility(459): hadoop.tmp.dir property value differs in configuration and system: Configuration=/tmp/hadoop-jenkins while System=/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/hadoop-tmp-dir Erasing configuration value by system value.
2018-07-02 07:34:05,582 DEBUG [main] hbase.HBaseTestingUtility(343): Setting hbase.rootdir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1
2018-07-02 07:34:05,592 INFO [Time-limited test] hbase.HBaseZKTestingUtility(85): Created new mini-cluster data directory: /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/d3e2824d-ef0d-7291-15c6-0d6b8fb13b8e/cluster_e5ce74bf-e006-09fb-5b50-13c7b1048392, deleteOnExit=true
2018-07-02 07:34:05,715 ERROR [Time-limited test] server.ZooKeeperServer(472): ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2018-07-02 07:34:05,734 INFO [Time-limited test] zookeeper.MiniZooKeeperCluster(281): Started MiniZooKeeperCluster and ran successful 'stat' on client port=59178
2018-07-02 07:34:05,736 INFO [Time-limited test] hbase.HBaseTestingUtility(953): Starting up minicluster with 2 master(s) and 3 regionserver(s) and 3 datanode(s)
2018-07-02 07:34:05,736 INFO [Time-limited test] hbase.HBaseZKTestingUtility(85): Created new mini-cluster data directory: /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe, deleteOnExit=true
2018-07-02 07:34:05,736 INFO [Time-limited test] hbase.HBaseTestingUtility(968): STARTING DFS
2018-07-02 07:34:05,737 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting test.cache.data to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cache_data in system properties and HBase conf
2018-07-02 07:34:05,737 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting hadoop.tmp.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/hadoop_tmp in system properties and HBase conf
2018-07-02 07:34:05,738 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting hadoop.log.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/hadoop_logs in system properties and HBase conf
2018-07-02 07:34:05,738 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/mapred_local in system properties and HBase conf
2018-07-02 07:34:05,740 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/mapred_temp in system properties and HBase conf
2018-07-02 07:34:05,741 INFO [Time-limited test] hbase.HBaseTestingUtility(736): read short circuit is OFF
2018-07-02 07:34:05,864 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-07-02 07:34:06,295 DEBUG [Time-limited test] fs.HFileSystem(317): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2018-07-02 07:34:07,830 WARN [Time-limited test] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2018-07-02 07:34:08,000 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-07-02 07:34:08,063 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-07-02 07:34:08,094 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs to /tmp/Jetty_localhost_37040_hdfs____.joi00e/webapp
2018-07-02 07:34:08,281 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37040
2018-07-02 07:34:09,312 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-07-02 07:34:09,319 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_56732_datanode____.hgolc4/webapp
2018-07-02 07:34:09,451 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:56732
2018-07-02 07:34:09,856 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-07-02 07:34:09,866 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_50317_datanode____.31h5vf/webapp
2018-07-02 07:34:10,212 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:50317
2018-07-02 07:34:10,280 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-07-02 07:34:10,287 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_46607_datanode____72pobm/webapp
2018-07-02 07:34:10,438 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46607
2018-07-02 07:34:11,156 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:38505] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
2018-07-02 07:34:11,156 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:38505] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
2018-07-02 07:34:11,158 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:38505] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
2018-07-02 07:34:11,244 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c15594d776: from storage DS-137fa992-0531-460e-8da1-5d0327e9db5c node DatanodeRegistration(127.0.0.1:33954, datanodeUuid=b63c29a3-a7bf-4a69-a4ea-7f6519264b08, infoPort=41709, infoSecurePort=0, ipcPort=54735, storageInfo=lv=-56;cid=testClusterID;nsid=2071817066;c=0), blocks: 0, hasStaleStorage: true, processing time: 3 msecs
2018-07-02 07:34:11,246 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c1559517c6: from storage DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8 node DatanodeRegistration(127.0.0.1:45556, datanodeUuid=deb1bee5-f7f2-4770-8e01-3677f7c2c853, infoPort=56361, infoSecurePort=0, ipcPort=54131, storageInfo=lv=-56;cid=testClusterID;nsid=2071817066;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2018-07-02 07:34:11,246 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c15594d420: from storage DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee node DatanodeRegistration(127.0.0.1:48785, datanodeUuid=e396df7c-0162-458b-b04d-8b308b84c161, infoPort=36030, infoSecurePort=0, ipcPort=59413, storageInfo=lv=-56;cid=testClusterID;nsid=2071817066;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2018-07-02 07:34:11,247 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c15594d776: from storage DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9 node DatanodeRegistration(127.0.0.1:33954, datanodeUuid=b63c29a3-a7bf-4a69-a4ea-7f6519264b08, infoPort=41709, infoSecurePort=0, ipcPort=54735, storageInfo=lv=-56;cid=testClusterID;nsid=2071817066;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2018-07-02 07:34:11,247 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c1559517c6: from storage DS-fb979981-ad7d-4df7-af08-69017228b672 node DatanodeRegistration(127.0.0.1:45556, datanodeUuid=deb1bee5-f7f2-4770-8e01-3677f7c2c853, infoPort=56361, infoSecurePort=0, ipcPort=54131, storageInfo=lv=-56;cid=testClusterID;nsid=2071817066;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2018-07-02 07:34:11,247 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c15594d420: from storage DS-56d6abd0-3a09-4c43-b351-0b985710fa52 node DatanodeRegistration(127.0.0.1:48785, datanodeUuid=e396df7c-0162-458b-b04d-8b308b84c161, infoPort=36030, infoSecurePort=0, ipcPort=59413, storageInfo=lv=-56;cid=testClusterID;nsid=2071817066;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2018-07-02 07:34:11,329 DEBUG [Time-limited test] hbase.HBaseTestingUtility(671): Setting hbase.rootdir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e
2018-07-02 07:34:11,341 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:11,344 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:11,748 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW]]} size 7
2018-07-02 07:34:11,748 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741825_1001 size 7
2018-07-02 07:34:11,749 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741825_1001 size 7
2018-07-02 07:34:12,161 INFO [Time-limited test] util.FSUtils(515): Created version file at hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370 with version=8
2018-07-02 07:34:12,161 INFO [Time-limited test] hbase.HBaseTestingUtility(1212): Setting hbase.fs.tmp.dir to hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/hbase-staging
2018-07-02 07:34:12,407 INFO [Time-limited test] metrics.MetricRegistriesLoader(66): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2018-07-02 07:34:12,699 INFO [Time-limited test] client.ConnectionUtils(122): master/asf911:0 server-side Connection retries=45
2018-07-02 07:34:12,719 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:34:12,721 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:34:12,721 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:34:12,850 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:34:13,079 DEBUG [Time-limited test] util.ClassSize(229): Using Unsafe to estimate memory layout
2018-07-02 07:34:13,178 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:39498
2018-07-02 07:34:13,193 INFO [Time-limited test] hfile.CacheConfig(553): Allocating onheap LruBlockCache size=995.60 MB, blockSize=64 KB
2018-07-02 07:34:13,201 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
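
The startup sequence above (MiniZooKeeperCluster on client port 59178, a 3-datanode HDFS minicluster, then 2 masters and 3 regionservers) is what HBaseTestingUtility produces when a test asks for a multi-node minicluster. A minimal sketch of driving the same startup, assuming the HBase 2.x-era startMiniCluster(numMasters, numSlaves) overload; the class and table names here are hypothetical:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws Exception {
        // Creates the target/test-data/<uuid> directories seen at the top of the log.
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Starts ZK, DFS, 2 masters and 3 "slaves" (3 regionservers backed by
        // 3 datanodes), matching "Starting up minicluster with 2 master(s) and
        // 3 regionserver(s) and 3 datanode(s)" above.
        util.startMiniCluster(2, 3);
        try {
          Table table = util.createTable(TableName.valueOf("sketch"), Bytes.toBytes("f"));
          // ... exercise the cluster ...
          table.close();
        } finally {
          util.shutdownMiniCluster(); // tears down regionservers, masters, DFS and ZK
        }
      }
    }
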
2018-07-02 07:34:13,202 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,205 DEBUG [Time-limited test] mob.MobFileCache(123): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2018-07-02 07:34:13,207 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,210 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,464 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=master:39498 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:13,543 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:394980x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:13,544 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): master:39498-0x16459e9b4500000 connected
2018-07-02 07:34:13,643 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/master
2018-07-02 07:34:13,644 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running
2018-07-02 07:34:13,654 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39498
2018-07-02 07:34:13,656 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=39498
2018-07-02 07:34:13,656 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39498
2018-07-02 07:34:13,663 INFO [Time-limited test] master.HMaster(495): hbase.rootdir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370, hbase.cluster.distributed=false
2018-07-02 07:34:13,699 INFO [Time-limited test] client.ConnectionUtils(122): master/asf911:0 server-side Connection retries=45
2018-07-02 07:34:13,700 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:34:13,700 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:34:13,700 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:34:13,700 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:34:13,702 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:51263
2018-07-02 07:34:13,704 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,704 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,706 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,709 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,711 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=master:51263 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:13,724 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:512630x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:13,725 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): master:51263-0x16459e9b4500001 connected
2018-07-02 07:34:13,751 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/master
2018-07-02 07:34:13,753 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running
2018-07-02 07:34:13,754 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=51263
2018-07-02 07:34:13,755 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=51263
2018-07-02 07:34:13,755 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=51263
2018-07-02 07:34:13,756 INFO [Time-limited test] master.HMaster(495): hbase.rootdir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370, hbase.cluster.distributed=false
2018-07-02 07:34:13,826 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45
2018-07-02 07:34:13,827 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:34:13,827 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:34:13,827 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:34:13,831 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:34:13,832 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-07-02 07:34:13,836 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:46264
2018-07-02 07:34:13,837 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,838 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,841 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,844 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,848 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:46264 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:13,858 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:462640x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:13,859 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:46264-0x16459e9b4500002 connected
2018-07-02 07:34:13,859 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/master
2018-07-02 07:34:13,860 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running
2018-07-02 07:34:13,861 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46264
2018-07-02 07:34:13,862 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=46264
2018-07-02 07:34:13,863 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46264
2018-07-02 07:34:13,893 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45
2018-07-02 07:34:13,894 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:34:13,894 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:34:13,894 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:34:13,895 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:34:13,895 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-07-02 07:34:13,900 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:42768
2018-07-02 07:34:13,901 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,902 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,905 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,910 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,914 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:42768 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:13,924 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:427680x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:13,925 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:42768-0x16459e9b4500003 connected
2018-07-02 07:34:13,926 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/master
2018-07-02 07:34:13,927 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running
2018-07-02 07:34:13,928 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42768
2018-07-02 07:34:13,930 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=42768
2018-07-02 07:34:13,931 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42768
2018-07-02 07:34:13,961 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45
2018-07-02 07:34:13,962 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:34:13,962 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:34:13,962 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:34:13,962 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:34:13,962 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-07-02 07:34:13,964 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:38972
2018-07-02 07:34:13,965 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,968 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:13,970 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,974 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:13,976 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:38972 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:13,984 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:389720x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:13,985 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:38972-0x16459e9b4500004 connected
2018-07-02 07:34:13,986 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/master
2018-07-02 07:34:13,986 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running
2018-07-02 07:34:13,988 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38972
2018-07-02 07:34:13,989 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=38972
2018-07-02 07:34:13,990 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38972
2018-07-02 07:34:13,999 INFO [Thread-157] master.HMaster(2108): Adding backup master ZNode /cluster1/backup-masters/asf911.gq1.ygridcore.net,39498,1530516852236
2018-07-02 07:34:13,999 INFO [Thread-158] master.HMaster(2108): Adding backup master ZNode /cluster1/backup-masters/asf911.gq1.ygridcore.net,51263,1530516853697
2018-07-02 07:34:14,017 DEBUG [Thread-158] zookeeper.ZKUtil(355): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/backup-masters/asf911.gq1.ygridcore.net,51263,1530516853697
2018-07-02 07:34:14,017 DEBUG [Thread-157] zookeeper.ZKUtil(355): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/backup-masters/asf911.gq1.ygridcore.net,39498,1530516852236
2018-07-02 07:34:14,067 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/master
2018-07-02 07:34:14,067 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/master
2018-07-02 07:34:14,068 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/master
2018-07-02 07:34:14,067 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/master
2018-07-02 07:34:14,067 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/master
2018-07-02 07:34:14,071 DEBUG [Thread-157] zookeeper.ZKUtil(355): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/master
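
The repeated ZKUtil lines "Set watcher on znode that does not yet exist, /cluster1/master" are, in stock ZooKeeper terms, exists() calls that register a watch on a path that is not there yet; the NodeCreated events at 07:34:14,067 fire the moment the winning master creates /cluster1/master. A minimal sketch with the plain ZooKeeper client (quorum address and paths copied from the log; this is not HBase's own ZKUtil):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        // Mirrors the "Received ZooKeeper Event, type=NodeCreated" lines above.
        Watcher watcher = (WatchedEvent e) -> {
          if (e.getType() == Watcher.Event.EventType.NodeCreated
              && "/cluster1/master".equals(e.getPath())) {
            created.countDown();
          }
        };
        ZooKeeper zk = new ZooKeeper("localhost:59178", 90000, watcher);
        // exists() on a missing path still registers the watch ("Set watcher on
        // znode that does not yet exist"); it returns null until someone creates it.
        if (zk.exists("/cluster1/master", watcher) == null) {
          created.await(); // unblocked once the active master registers itself
        }
        zk.close();
      }
    }
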
2018-07-02 07:34:14,071 DEBUG [Thread-158] zookeeper.ZKUtil(355): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/master
2018-07-02 07:34:14,072 INFO [Thread-158] master.ActiveMasterManager(172): Deleting ZNode for /cluster1/backup-masters/asf911.gq1.ygridcore.net,51263,1530516853697 from backup master directory
2018-07-02 07:34:14,075 INFO [Thread-157] master.ActiveMasterManager(218): Another master is the active master, asf911.gq1.ygridcore.net,51263,1530516853697; waiting to become the next active master
2018-07-02 07:34:14,083 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/master
2018-07-02 07:34:14,084 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/backup-masters/asf911.gq1.ygridcore.net,51263,1530516853697
2018-07-02 07:34:14,084 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/master
2018-07-02 07:34:14,084 WARN [Thread-158] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2018-07-02 07:34:14,085 INFO [Thread-158] master.ActiveMasterManager(181): Registered as active master=asf911.gq1.ygridcore.net,51263,1530516853697
2018-07-02 07:34:14,089 INFO [Thread-158] regionserver.ChunkCreator(498): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 448, initial count 0
2018-07-02 07:34:14,091 INFO [Thread-158] regionserver.ChunkCreator(498): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 497, initial count 0
2018-07-02 07:34:14,221 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0
2018-07-02 07:34:14,222 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED], ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0
2018-07-02 07:34:14,224 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED], ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0
2018-07-02 07:34:14,228 DEBUG [Thread-158] util.FSUtils(667): Created cluster ID file at hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/hbase.id with ID: 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:34:14,269 INFO [Thread-158] master.MasterFileSystem(393): BOOTSTRAP: creating hbase:meta region
2018-07-02 07:34:14,274 INFO [Thread-158] regionserver.HRegion(6931): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'false', BLOCKSIZE => '8192'}, {NAME => 'rep_barrier', VERSIONS => '2147483647', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'table', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370 Table name == hbase:meta
2018-07-02 07:34:14,316 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW]]} size 0
2018-07-02 07:34:14,320 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|FINALIZED]]} size 0
2018-07-02 07:34:14,321 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|FINALIZED], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0
2018-07-02 07:34:14,328 DEBUG [Thread-158] regionserver.HRegion(829): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-07-02 07:34:14,383 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/info
2018-07-02 07:34:14,404 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:14,417 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:34:14,435 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:34:14,440 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/rep_barrier
2018-07-02 07:34:14,442 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:14,442 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:34:14,444 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:34:14,450 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/table
2018-07-02 07:34:14,451 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:34:14,452 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:34:14,456 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:34:14,457 DEBUG [Thread-158] regionserver.HRegion(925): replaying wal for 1588230740
2018-07-02 07:34:14,466 DEBUG [Thread-158] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740
2018-07-02 07:34:14,467 DEBUG [Thread-158] regionserver.HRegion(933): stopping wal replay for 1588230740
2018-07-02 07:34:14,467 DEBUG [Thread-158] regionserver.HRegion(945): Cleaning up temporary data for 1588230740
2018-07-02 07:34:14,479 DEBUG [Thread-158] regionserver.HRegion(956): Cleaning up detritus for 1588230740
2018-07-02 07:34:14,485 DEBUG [Thread-158] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7M)) instead.
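
Each CompactionConfiguration line above prints the store's effective compaction tuning: file counts [3, 10), ratio 1.2, off-peak ratio 5.0, and a major-compaction period of 604800000 ms (one week) with 0.5 jitter. Those values correspond to standard hbase-site keys; a sketch that pins the same values explicitly (key names are the stock HBase ones, stated here from memory rather than from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "files [3, 10)" in the CompactionConfiguration line:
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        // "ratio 1.200000; off-peak ratio 5.000000":
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        // "major period 604800000, major jitter 0.500000" (one week +/- 50%):
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
        System.out.println("compaction ratio = "
            + conf.getFloat("hbase.hstore.compaction.ratio", -1f));
      }
    }
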
2018-07-02 07:34:14,487 DEBUG [Thread-158] regionserver.HRegion(978): writing seq id for 1588230740
2018-07-02 07:34:14,494 DEBUG [Thread-158] wal.WALSplitter(678): Wrote file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-07-02 07:34:14,495 INFO [Thread-158] regionserver.HRegion(982): Opened 1588230740; next sequenceid=2
2018-07-02 07:34:14,495 DEBUG [Thread-158] regionserver.HRegion(1527): Closing 1588230740, disabling compactions & flushes
2018-07-02 07:34:14,495 DEBUG [Thread-158] regionserver.HRegion(1567): Updates disabled for region hbase:meta,,1.1588230740
2018-07-02 07:34:14,497 INFO [Thread-158] regionserver.HRegion(1681): Closed hbase:meta,,1.1588230740
2018-07-02 07:34:14,538 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0
2018-07-02 07:34:14,539 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0
2018-07-02 07:34:14,539 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0
2018-07-02 07:34:14,545 DEBUG [Thread-158] util.FSTableDescriptors(683): Wrote into hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2018-07-02 07:34:14,586 INFO [Thread-158] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:14,599 INFO [Thread-158] coordination.ZKSplitLogManagerCoordination(494): Found 0 orphan tasks and 0 rescan nodes
2018-07-02 07:34:14,638 INFO [Thread-158] zookeeper.ReadOnlyZKClient(139): Connect 0x3128d54d to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:14,678 DEBUG [Thread-158] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20f73e60, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:14,713 INFO [Thread-158] procedure2.ProcedureExecutor(528): Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker count=160
2018-07-02 07:34:14,724 WARN [Thread-158] util.CommonFSUtils$StreamCapabilities(830): Your Hadoop installation does not include the StreamCapabilities class from HDFS-11644, so we will skip checking if any FSDataOutputStreams actually support hflush/hsync. If you are running on top of HDFS this probably just means you have an older version and this can be ignored. If you are running on top of an alternate FileSystem implementation you should manually verify that hflush and hsync are implemented; otherwise you risk data loss and hard to diagnose errors when our assumptions are violated.
2018-07-02 07:34:14,725 DEBUG [Thread-158] util.CommonFSUtils$StreamCapabilities(837): The first request to check for StreamCapabilities came from this stacktrace.
java.lang.ClassNotFoundException: org.apache.hadoop.fs.StreamCapabilities
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.util.CommonFSUtils$StreamCapabilities.<clinit>(CommonFSUtils.java:826)
    at org.apache.hadoop.hbase.util.CommonFSUtils.hasCapability(CommonFSUtils.java:864)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1042)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:382)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:545)
    at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1343)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:878)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2128)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:572)
    at java.lang.Thread.run(Thread.java:748)
2018-07-02 07:34:14,730 INFO [Thread-158] wal.WALProcedureStore(1077): Rolled new Procedure Store WAL, id=1
2018-07-02 07:34:14,731 INFO [Thread-158] procedure2.ProcedureExecutor(547): Recovered WALProcedureStore lease in 16msec
2018-07-02 07:34:14,732 INFO [Thread-158] procedure2.ProcedureExecutor(561): Loaded WALProcedureStore in 0msec
2018-07-02 07:34:14,733 INFO [Thread-158] procedure2.RemoteProcedureDispatcher(97): Instantiated, coreThreads=128 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2018-07-02 07:34:14,761 DEBUG [Thread-158] zookeeper.ZKUtil(614): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Unable to get data of znode /cluster1/meta-region-server because node does not exist (not an error)
2018-07-02 07:34:14,800 INFO [Thread-158] balancer.BaseLoadBalancer(1039): slop=0.001, tablesOnMaster=false, systemTablesOnMaster=false
2018-07-02 07:34:14,807 INFO [Thread-158] balancer.StochasticLoadBalancer(216): Loaded config; maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, etc.
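
The WARN at 07:34:14,724 and the deliberately logged stacktrace after it describe a one-time reflection probe: the code looks for Hadoop's org.apache.hadoop.fs.StreamCapabilities (added by HDFS-11644), and if the class is absent it skips the hflush/hsync check and assumes the stream is safe. A condensed sketch of that probe pattern (a hypothetical helper, not the exact CommonFSUtils code):

    import java.lang.reflect.Method;
    import org.apache.hadoop.fs.FSDataOutputStream;

    public final class StreamCapabilityProbe {
      // Resolved once, like the static initializer the stacktrace above points at.
      private static final Class<?> CAPS;
      static {
        Class<?> c = null;
        try {
          c = Class.forName("org.apache.hadoop.fs.StreamCapabilities");
        } catch (ClassNotFoundException e) {
          // Pre-HDFS-11644 Hadoop: this is the exception the DEBUG line prints.
        }
        CAPS = c;
      }

      /** True if the stream claims the capability, or if we cannot check at all. */
      public static boolean hasCapability(FSDataOutputStream out, String capability) {
        if (CAPS == null) {
          return true; // skip the check, as the WARN explains, and hope for hflush
        }
        if (!CAPS.isInstance(out)) {
          return false; // interface exists but this stream does not advertise it
        }
        try {
          Method m = CAPS.getMethod("hasCapability", String.class);
          return (Boolean) m.invoke(out, capability);
        } catch (ReflectiveOperationException e) {
          return true; // assume the optimistic default rather than fail startup
        }
      }
    }
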
2018-07-02 07:34:14,828 DEBUG [Thread-158] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/balancer 2018-07-02 07:34:14,830 DEBUG [Thread-158] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/normalizer 2018-07-02 07:34:14,842 DEBUG [Thread-158] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/switch/split 2018-07-02 07:34:14,842 DEBUG [Thread-158] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/switch/merge 2018-07-02 07:34:14,874 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:14,875 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:14,875 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:14,875 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:14,874 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:14,877 INFO [Thread-158] master.HMaster(787): Active/primary master=asf911.gq1.ygridcore.net,51263,1530516853697, sessionid=0x16459e9b4500001, setting cluster-up flag (Was=false) 2018-07-02 07:34:14,878 INFO [M:0;asf911:39498] regionserver.HRegionServer(874): ClusterId : 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:14,941 DEBUG [Thread-158] procedure.ZKProcedureUtil(272): Clearing all znodes /cluster1/flush-table-proc/acquired, /cluster1/flush-table-proc/reached, /cluster1/flush-table-proc/abort 2018-07-02 07:34:14,943 DEBUG [Thread-158] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=asf911.gq1.ygridcore.net,51263,1530516853697 2018-07-02 07:34:14,991 DEBUG [Thread-158] procedure.ZKProcedureUtil(272): Clearing all znodes /cluster1/online-snapshot/acquired, /cluster1/online-snapshot/reached, /cluster1/online-snapshot/abort 2018-07-02 07:34:14,993 DEBUG [Thread-158] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=asf911.gq1.ygridcore.net,51263,1530516853697 2018-07-02 07:34:14,999 INFO [Thread-158] master.ServerManager(1104): No .lastflushedseqids found at hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/.lastflushedseqids will record last flushed sequence id for regions by regionserver report all over again 2018-07-02 07:34:15,097 INFO [RS:1;asf911:42768] regionserver.HRegionServer(874):
ClusterId : 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:15,098 INFO [RS:2;asf911:38972] regionserver.HRegionServer(874): ClusterId : 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:15,098 INFO [RS:0;asf911:46264] regionserver.HRegionServer(874): ClusterId : 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:15,105 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:15,105 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:15,105 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:15,126 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:15,126 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:15,127 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:15,128 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:15,128 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:15,128 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:15,142 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:15,143 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:15,145 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:15,145 INFO [RS:0;asf911:46264] zookeeper.ReadOnlyZKClient(139): Connect 0x4c7d5474 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:15,146 INFO [RS:2;asf911:38972] zookeeper.ReadOnlyZKClient(139): Connect 0x5c5c6f91 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:15,147 INFO [RS:1;asf911:42768] zookeeper.ReadOnlyZKClient(139): Connect 0x4fdbce1e to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:15,175 DEBUG [RS:0;asf911:46264] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42baa38b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:15,175 DEBUG [RS:1;asf911:42768] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@411c2995, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:15,176 DEBUG [RS:0;asf911:46264] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46853f04, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:15,176 DEBUG [RS:2;asf911:38972] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@529b9a6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:15,177 DEBUG [RS:1;asf911:42768] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2123ae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:15,178 DEBUG [RS:2;asf911:38972] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13a69051, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:15,181 DEBUG [RS:2;asf911:38972] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:2;asf911:38972 2018-07-02 07:34:15,181 DEBUG [RS:0;asf911:46264] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:0;asf911:46264 2018-07-02 07:34:15,181 DEBUG [RS:1;asf911:42768] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:1;asf911:42768 2018-07-02 07:34:15,187 INFO [RS:1;asf911:42768] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled 2018-07-02 07:34:15,187 INFO [RS:2;asf911:38972] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled 2018-07-02 07:34:15,188 INFO [RS:2;asf911:38972] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled 2018-07-02 07:34:15,188 INFO [RS:0;asf911:46264] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled 2018-07-02 07:34:15,189 INFO [RS:0;asf911:46264] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled 2018-07-02 07:34:15,188 INFO [RS:1;asf911:42768] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled 2018-07-02 07:34:15,192 INFO [RS:1;asf911:42768] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,51263,1530516853697 with port=42768, startcode=1530516853889 2018-07-02 07:34:15,193 INFO [RS:2;asf911:38972] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,51263,1530516853697 with port=38972, startcode=1530516853959 2018-07-02 07:34:15,193 INFO [RS:0;asf911:46264] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,51263,1530516853697 with port=46264, startcode=1530516853823 2018-07-02 07:34:15,367 DEBUG [Thread-158] procedure2.ProcedureExecutor(887): Stored pid=1, state=RUNNABLE:INIT_META_ASSIGN_META; InitMetaProcedure table=hbase:meta 2018-07-02 07:34:15,377 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:41725, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2018-07-02 07:34:15,377 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:41668, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2018-07-02 07:34:15,377 INFO [RS-EventLoopGroup-3-4] 
ipc.ServerRpcConnection(556): Connection from 67.195.81.155:39889, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2018-07-02 07:34:15,390 DEBUG [Thread-158] executor.ExecutorService(92): Starting executor service name=MASTER_OPEN_REGION-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:15,390 DEBUG [Thread-158] executor.ExecutorService(92): Starting executor service name=MASTER_CLOSE_REGION-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:15,391 DEBUG [Thread-158] executor.ExecutorService(92): Starting executor service name=MASTER_SERVER_OPERATIONS-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:15,391 DEBUG [Thread-158] executor.ExecutorService(92): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:15,391 DEBUG [Thread-158] executor.ExecutorService(92): Starting executor service name=M_LOG_REPLAY_OPS-master/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:15,391 DEBUG [Thread-158] executor.ExecutorService(92): Starting executor service name=MASTER_TABLE_OPERATIONS-master/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:15,397 INFO [Thread-158] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.ProcedureExecutor$CompletedProcedureCleaner; timeout=30000, timestamp=1530516885397 2018-07-02 07:34:15,399 INFO [Thread-158] cleaner.CleanerChore$DirScanPool(90): Cleaner pool size is 4 2018-07-02 07:34:15,401 DEBUG [Thread-158] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2018-07-02 07:34:15,401 INFO [Thread-158] zookeeper.RecoverableZooKeeper(106): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:15,402 DEBUG [Thread-158] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2018-07-02 07:34:15,404 DEBUG [Thread-158] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2018-07-02 07:34:15,404 INFO [PEWorker-2] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}] 2018-07-02 07:34:15,404 INFO [Thread-158] cleaner.LogCleaner(122): Creating OldWALs cleaners with size=2 2018-07-02 07:34:15,412 DEBUG [Thread-158] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2018-07-02 07:34:15,415 DEBUG [Thread-158] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2018-07-02 07:34:15,416 DEBUG [Thread-158] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2018-07-02 07:34:15,417 DEBUG [Thread-158-EventThread] zookeeper.ZKWatcher(478): replicationLogCleaner0x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:15,418 DEBUG [Thread-158-EventThread] zookeeper.ZKWatcher(543): replicationLogCleaner-0x16459e9b4500009 connected 2018-07-02 07:34:15,419 DEBUG [Thread-158] cleaner.HFileCleaner(207): Starting for large file=Thread[Thread-158-HFileCleaner.large.0-1530516855419,5,FailOnTimeoutGroup] 2018-07-02 07:34:15,419 DEBUG [Thread-158] cleaner.HFileCleaner(222): Starting for 
small files=Thread[Thread-158-HFileCleaner.small.0-1530516855419,5,FailOnTimeoutGroup] 2018-07-02 07:34:15,436 DEBUG [RS:2;asf911:38972] regionserver.HRegionServer(2625): Master is not running yet 2018-07-02 07:34:15,436 DEBUG [RS:1;asf911:42768] regionserver.HRegionServer(2625): Master is not running yet 2018-07-02 07:34:15,436 WARN [RS:1;asf911:42768] regionserver.HRegionServer(950): reportForDuty failed; sleeping and then retrying. 2018-07-02 07:34:15,436 DEBUG [RS:0;asf911:46264] regionserver.HRegionServer(2625): Master is not running yet 2018-07-02 07:34:15,436 WARN [RS:0;asf911:46264] regionserver.HRegionServer(950): reportForDuty failed; sleeping and then retrying. 2018-07-02 07:34:15,436 WARN [RS:2;asf911:38972] regionserver.HRegionServer(950): reportForDuty failed; sleeping and then retrying. 2018-07-02 07:34:15,513 INFO [PEWorker-11] procedure.MasterProcedureScheduler(697): pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 checking lock on 1588230740 2018-07-02 07:34:15,524 INFO [PEWorker-11] assignment.AssignProcedure(218): Starting pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; forceNewPlan=false, retain=false 2018-07-02 07:34:15,675 WARN [master/asf911:0] assignment.AssignmentManager(1669): No servers available; cannot place 1 unassigned regions. 2018-07-02 07:34:16,438 INFO [RS:2;asf911:38972] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,51263,1530516853697 with port=38972, startcode=1530516853959 2018-07-02 07:34:16,438 INFO [RS:1;asf911:42768] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,51263,1530516853697 with port=42768, startcode=1530516853889 2018-07-02 07:34:16,438 INFO [RS:0;asf911:46264] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,51263,1530516853697 with port=46264, startcode=1530516853823 2018-07-02 07:34:16,446 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,447 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=51263] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,456 DEBUG [RS:2;asf911:38972] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370 2018-07-02 07:34:16,456 DEBUG [RS:1;asf911:42768] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370 2018-07-02 07:34:16,456 DEBUG [RS:0;asf911:46264] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370 2018-07-02 07:34:16,457 DEBUG [RS:1;asf911:42768] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:38505 2018-07-02 07:34:16,458 DEBUG [RS:1;asf911:42768] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1 2018-07-02 07:34:16,457 DEBUG [RS:2;asf911:38972] regionserver.HRegionServer(1505): Config from master: 
fs.defaultFS=hdfs://localhost:38505 2018-07-02 07:34:16,458 DEBUG [RS:2;asf911:38972] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1 2018-07-02 07:34:16,457 DEBUG [RS:0;asf911:46264] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:38505 2018-07-02 07:34:16,459 DEBUG [RS:0;asf911:46264] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1 2018-07-02 07:34:16,493 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/rs 2018-07-02 07:34:16,508 DEBUG [RS:2;asf911:38972] zookeeper.ZKUtil(355): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,508 DEBUG [RS:0;asf911:46264] zookeeper.ZKUtil(355): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,508 WARN [RS:2;asf911:38972] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2018-07-02 07:34:16,508 WARN [RS:0;asf911:46264] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2018-07-02 07:34:16,510 DEBUG [RS:1;asf911:42768] zookeeper.ZKUtil(355): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,510 WARN [RS:1;asf911:42768] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2018-07-02 07:34:16,510 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,42768,1530516853889] 2018-07-02 07:34:16,510 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,46264,1530516853823] 2018-07-02 07:34:16,510 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,38972,1530516853959] 2018-07-02 07:34:16,519 INFO [RS:2;asf911:38972] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:16,519 INFO [RS:1;asf911:42768] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:16,519 INFO [RS:0;asf911:46264] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:16,532 DEBUG [RS:1;asf911:42768] regionserver.HRegionServer(1815): logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,532 DEBUG [RS:0;asf911:46264] regionserver.HRegionServer(1815): logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,532 DEBUG [RS:2;asf911:38972] regionserver.HRegionServer(1815): logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,560 DEBUG [RS:0;asf911:46264] zookeeper.ZKUtil(355): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,560 DEBUG [RS:2;asf911:38972] zookeeper.ZKUtil(355): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,560 DEBUG [RS:1;asf911:42768] zookeeper.ZKUtil(355): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,561 DEBUG [RS:0;asf911:46264] zookeeper.ZKUtil(355): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,561 DEBUG [RS:1;asf911:42768] zookeeper.ZKUtil(355): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,561 DEBUG [RS:2;asf911:38972] zookeeper.ZKUtil(355): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,561 DEBUG [RS:0;asf911:46264] zookeeper.ZKUtil(355): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,562 DEBUG [RS:1;asf911:42768] zookeeper.ZKUtil(355): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing 
znode=/cluster1/rs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,562 DEBUG [RS:2;asf911:38972] zookeeper.ZKUtil(355): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on existing znode=/cluster1/rs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,568 DEBUG [RS:0;asf911:46264] regionserver.Replication(144): Replication stats-in-log period=5 seconds 2018-07-02 07:34:16,568 DEBUG [RS:1;asf911:42768] regionserver.Replication(144): Replication stats-in-log period=5 seconds 2018-07-02 07:34:16,568 DEBUG [RS:2;asf911:38972] regionserver.Replication(144): Replication stats-in-log period=5 seconds 2018-07-02 07:34:16,581 INFO [RS:2;asf911:38972] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds 2018-07-02 07:34:16,581 INFO [RS:1;asf911:42768] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds 2018-07-02 07:34:16,581 INFO [RS:0;asf911:46264] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds 2018-07-02 07:34:16,625 INFO [RS:1;asf911:42768] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-07-02 07:34:16,625 INFO [RS:2;asf911:38972] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-07-02 07:34:16,625 INFO [RS:0;asf911:46264] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-07-02 07:34:16,633 INFO [RS:0;asf911:46264] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-07-02 07:34:16,641 INFO [RS:2;asf911:38972] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-07-02 07:34:16,633 INFO [RS:1;asf911:42768] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-07-02 07:34:16,645 INFO [RS:0;asf911:46264] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S 2018-07-02 07:34:16,645 INFO [RS:1;asf911:42768] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S 2018-07-02 07:34:16,645 INFO [RS:2;asf911:38972] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S 2018-07-02 07:34:16,658 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,658 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,659 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,659 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,658 DEBUG [RS:1;asf911:42768] 
executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,659 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,659 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,660 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,660 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,660 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,660 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,660 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,660 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,660 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:16,661 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,661 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,661 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:16,661 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:16,661 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:16,662 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,661 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:16,662 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:16,662 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,662 DEBUG [RS:2;asf911:38972] executor.ExecutorService(92): Starting executor service 
name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,662 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:16,662 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:16,663 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:16,663 DEBUG [RS:1;asf911:42768] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,663 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:16,664 DEBUG [RS:0;asf911:46264] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:16,680 DEBUG [master/asf911:0] assignment.AssignmentManager(1690): Processing assignQueue; systemServersCount=3, allServersCount=3 2018-07-02 07:34:16,700 INFO [PEWorker-12] assignment.AssignProcedure(246): Early suspend! pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,705 INFO [SplitLogWorker-asf911:38972] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,38972,1530516853959 starting 2018-07-02 07:34:16,705 INFO [SplitLogWorker-asf911:46264] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,46264,1530516853823 starting 2018-07-02 07:34:16,707 INFO [SplitLogWorker-asf911:42768] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,42768,1530516853889 starting 2018-07-02 07:34:16,711 INFO [RS:1;asf911:42768] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-07-02 07:34:16,711 INFO [RS:2;asf911:38972] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-07-02 07:34:16,711 INFO [RS:0;asf911:46264] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-07-02 07:34:16,756 INFO [RS:0;asf911:46264] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,46264,1530516853823, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:46264, sessionid=0x16459e9b4500002 2018-07-02 07:34:16,756 INFO [RS:1;asf911:42768] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,42768,1530516853889, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:42768, sessionid=0x16459e9b4500003 2018-07-02 07:34:16,756 INFO [RS:2;asf911:38972] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,38972,1530516853959, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:38972, sessionid=0x16459e9b4500004 2018-07-02 07:34:16,756 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-07-02 07:34:16,756 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-07-02 07:34:16,756 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-07-02 
07:34:16,758 DEBUG [RS:1;asf911:42768] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,757 DEBUG [RS:0;asf911:46264] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,757 DEBUG [RS:2;asf911:38972] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,758 DEBUG [RS:1;asf911:42768] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,42768,1530516853889' 2018-07-02 07:34:16,761 DEBUG [RS:1;asf911:42768] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster1/flush-table-proc/abort' 2018-07-02 07:34:16,758 DEBUG [RS:0;asf911:46264] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,46264,1530516853823' 2018-07-02 07:34:16,758 DEBUG [RS:2;asf911:38972] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,38972,1530516853959' 2018-07-02 07:34:16,761 DEBUG [RS:2;asf911:38972] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster1/flush-table-proc/abort' 2018-07-02 07:34:16,761 DEBUG [RS:0;asf911:46264] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster1/flush-table-proc/abort' 2018-07-02 07:34:16,762 DEBUG [RS:1;asf911:42768] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster1/flush-table-proc/acquired' 2018-07-02 07:34:16,762 DEBUG [RS:2;asf911:38972] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster1/flush-table-proc/acquired' 2018-07-02 07:34:16,762 DEBUG [RS:0;asf911:46264] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster1/flush-table-proc/acquired' 2018-07-02 07:34:16,762 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-07-02 07:34:16,762 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-07-02 07:34:16,763 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-07-02 07:34:16,763 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-07-02 07:34:16,763 DEBUG [RS:1;asf911:42768] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:16,763 DEBUG [RS:2;asf911:38972] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:16,763 DEBUG [RS:2;asf911:38972] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,38972,1530516853959' 2018-07-02 07:34:16,763 DEBUG [RS:2;asf911:38972] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster1/online-snapshot/abort' 2018-07-02 07:34:16,763 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-07-02 07:34:16,763 DEBUG [RS:1;asf911:42768] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,42768,1530516853889' 2018-07-02 07:34:16,763 DEBUG [RS:1;asf911:42768] 
procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster1/online-snapshot/abort' 2018-07-02 07:34:16,763 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-07-02 07:34:16,764 DEBUG [RS:0;asf911:46264] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:16,764 DEBUG [RS:2;asf911:38972] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster1/online-snapshot/acquired' 2018-07-02 07:34:16,764 DEBUG [RS:0;asf911:46264] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,46264,1530516853823' 2018-07-02 07:34:16,764 DEBUG [RS:0;asf911:46264] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster1/online-snapshot/abort' 2018-07-02 07:34:16,764 DEBUG [RS:2;asf911:38972] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started 2018-07-02 07:34:16,765 DEBUG [RS:1;asf911:42768] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster1/online-snapshot/acquired' 2018-07-02 07:34:16,764 DEBUG [RS:0;asf911:46264] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster1/online-snapshot/acquired' 2018-07-02 07:34:16,765 INFO [RS:2;asf911:38972] quotas.RegionServerRpcQuotaManager(62): Quota support disabled 2018-07-02 07:34:16,765 INFO [RS:2;asf911:38972] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager. 2018-07-02 07:34:16,765 DEBUG [RS:1;asf911:42768] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started 2018-07-02 07:34:16,765 DEBUG [RS:0;asf911:46264] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started 2018-07-02 07:34:16,765 INFO [RS:1;asf911:42768] quotas.RegionServerRpcQuotaManager(62): Quota support disabled 2018-07-02 07:34:16,765 INFO [RS:0;asf911:46264] quotas.RegionServerRpcQuotaManager(62): Quota support disabled 2018-07-02 07:34:16,766 INFO [RS:0;asf911:46264] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager. 2018-07-02 07:34:16,765 INFO [RS:1;asf911:42768] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager. 2018-07-02 07:34:17,813 WARN [RS:1;asf911:42768] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:34:17,813 INFO [RS:1;asf911:42768] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C42768%2C1530516853889, suffix=, logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,42768,1530516853889, archiveDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:17,817 WARN [RS:2;asf911:38972] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 
2018-07-02 07:34:17,817 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C38972%2C1530516853959, suffix=, logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959, archiveDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:17,821 WARN [RS:0;asf911:46264] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:34:17,821 INFO [RS:0;asf911:46264] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C46264%2C1530516853823, suffix=, logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823, archiveDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:17,853 DEBUG [RS:2;asf911:38972] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(421): org.apache.hadoop.hdfs.protocolPB.PBHelperClient not found (Hadoop is pre-2.8.0?); using class org.apache.hadoop.hdfs.protocolPB.PBHelper instead. 2018-07-02 07:34:17,906 DEBUG [RS:2;asf911:38972] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(494): No DfsClientConf class found, should be hadoop 2.7- java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.client.impl.DfsClientConf at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createChecksumCreater(FanOutOneBlockAsyncDFSOutputHelper.java:492) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:556) at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51) at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:167) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:165) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:102) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:660) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:666) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:769) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:500) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:441) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:142) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:58) at org.apache.hadoop.hbase.wal.SyncReplicationWALProvider.getWAL(SyncReplicationWALProvider.java:195) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:262) at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2115) at
org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1325) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1191) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1007) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:183) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:129) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:167) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:164) at java.lang.Thread.run(Thread.java:748) 2018-07-02 07:34:17,907 DEBUG [RS:2;asf911:38972] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(528): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2018-07-02 07:34:17,944 DEBUG [RS-EventLoopGroup-6-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(199): No PBHelperClient class found, should be hadoop 2.7- java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.protocolPB.PBHelperClient at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createPBHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:197) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:261) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:638) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:676) at
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:552) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) 2018-07-02 07:34:17,949 DEBUG [RS-EventLoopGroup-6-11] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK] 2018-07-02 07:34:17,949 DEBUG [RS-EventLoopGroup-6-12] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK] 2018-07-02 07:34:17,949 DEBUG [RS-EventLoopGroup-6-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33954,DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9,DISK] 2018-07-02 07:34:17,950 DEBUG [RS-EventLoopGroup-6-6] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK] 2018-07-02 07:34:17,950 DEBUG [RS-EventLoopGroup-6-7] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33954,DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9,DISK] 2018-07-02 07:34:17,950 DEBUG [RS-EventLoopGroup-6-8] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33954,DS-137fa992-0531-460e-8da1-5d0327e9db5c,DISK] 2018-07-02 07:34:17,950 DEBUG [RS-EventLoopGroup-6-13] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK] 2018-07-02 07:34:17,950 DEBUG [RS-EventLoopGroup-6-10] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK] 2018-07-02 07:34:17,950 DEBUG [RS-EventLoopGroup-6-9] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:48785,DS-56d6abd0-3a09-4c43-b351-0b985710fa52,DISK] 2018-07-02 07:34:18,016 INFO [RS:0;asf911:46264] wal.AbstractFSWAL(686): New WAL 
/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823/asf911.gq1.ygridcore.net%2C46264%2C1530516853823.1530516857838 2018-07-02 07:34:18,016 INFO [RS:1;asf911:42768] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,42768,1530516853889/asf911.gq1.ygridcore.net%2C42768%2C1530516853889.1530516857838 2018-07-02 07:34:18,016 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959.1530516857838 2018-07-02 07:34:18,017 DEBUG [RS:0;asf911:46264] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33954,DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9,DISK], DatanodeInfoWithStorage[127.0.0.1:48785,DS-56d6abd0-3a09-4c43-b351-0b985710fa52,DISK], DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK]] 2018-07-02 07:34:18,017 DEBUG [RS:1;asf911:42768] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33954,DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9,DISK], DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK], DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK]] 2018-07-02 07:34:18,017 DEBUG [RS:2;asf911:38972] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK], DatanodeInfoWithStorage[127.0.0.1:33954,DS-137fa992-0531-460e-8da1-5d0327e9db5c,DISK], DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK]] 2018-07-02 07:34:18,046 INFO [PEWorker-13] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,066 DEBUG [PEWorker-13] zookeeper.MetaTableLocator(466): META region location doesn't exist, create it 2018-07-02 07:34:18,075 INFO [PEWorker-13] assignment.RegionTransitionProcedure(241): Dispatch pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,288 DEBUG [RSProcedureDispatcher-pool3-t1] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,299 INFO [RS-EventLoopGroup-6-24] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:55247, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-07-02 07:34:18,307 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=38972] regionserver.RSRpcServices(1983): Open hbase:meta,,1.1588230740 2018-07-02 07:34:18,311 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:18,319 WARN [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 
2018-07-02 07:34:18,320 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C38972%2C1530516853959.meta, suffix=.meta, logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959, archiveDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:18,333 DEBUG [RS-EventLoopGroup-6-25] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK] 2018-07-02 07:34:18,338 DEBUG [RS-EventLoopGroup-6-26] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33954,DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9,DISK] 2018-07-02 07:34:18,341 DEBUG [RS-EventLoopGroup-6-27] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK] 2018-07-02 07:34:18,352 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959.meta.1530516858322.meta 2018-07-02 07:34:18,353 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK], DatanodeInfoWithStorage[127.0.0.1:33954,DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9,DISK], DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK]] 2018-07-02 07:34:18,354 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:18,381 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(200): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2018-07-02 07:34:18,400 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(8086): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2018-07-02 07:34:18,410 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.RegionCoprocessorHost(394): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2018-07-02 07:34:18,417 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table meta 1588230740 2018-07-02 07:34:18,418 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:18,419 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 1588230740 2018-07-02 07:34:18,420 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 1588230740 2018-07-02 07:34:18,431 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/info 2018-07-02 07:34:18,431 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/info 2018-07-02 07:34:18,433 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:18,433 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:18,435 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:18,440 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:34:18,440 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:34:18,441 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:18,442 INFO [StoreOpener-1588230740-1] 
compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:18,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:18,446 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/table 2018-07-02 07:34:18,446 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/table 2018-07-02 07:34:18,448 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:18,448 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:18,449 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:18,450 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 1588230740 2018-07-02 07:34:18,456 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740 2018-07-02 07:34:18,456 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for 1588230740 2018-07-02 07:34:18,456 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 1588230740 2018-07-02 07:34:18,457 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 1588230740 2018-07-02 07:34:18,460 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using 
region.getMemStoreFlushHeapSize/# of families (42.7M) instead. 2018-07-02 07:34:18,461 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 1588230740 2018-07-02 07:34:18,462 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 1588230740; next sequenceid=2 2018-07-02 07:34:18,462 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 1588230740 2018-07-02 07:34:18,503 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:meta,,1.1588230740 2018-07-02 07:34:18,530 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=51263] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=2, pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,531 DEBUG [PEWorker-14] assignment.RegionTransitionProcedure(354): Finishing pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,533 INFO [PEWorker-14] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,541 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:meta,,1.1588230740 2018-07-02 07:34:18,543 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:meta,,1.1588230740 on asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:18,550 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/cluster1/meta-region-server 2018-07-02 07:34:18,954 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_397032014_23 at /127.0.0.1:40853 [Receiving block BP-1443818035-67.195.81.155-1530516847306:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 306ms (threshold=300ms), isSync:true, flushTotalNanos=11523ns 2018-07-02 07:34:18,963 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_397032014_23 at /127.0.0.1:49886 [Receiving block BP-1443818035-67.195.81.155-1530516847306:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 315ms (threshold=300ms), isSync:true, flushTotalNanos=13364ns 2018-07-02 07:34:18,969 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_397032014_23 at /127.0.0.1:43663 [Receiving block BP-1443818035-67.195.81.155-1530516847306:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 322ms (threshold=300ms), isSync:true, flushTotalNanos=7019ns 2018-07-02 07:34:18,970 INFO [PEWorker-14] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=1, state=RUNNABLE; InitMetaProcedure table=hbase:meta; resume parent processing. 
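The three Slow flushOrSync warnings above fire when a sync exceeds the datanode's slow-I/O warning threshold, reported as threshold=300ms. A sketch, assuming the Hadoop 2.7 key name dfs.datanode.slow.io.warning.threshold.ms (default 300 ms), of relaxing that threshold on a slow test host; the 1000 ms value is purely illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class SlowIoThresholdSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default is 300 ms, matching "threshold=300ms" in the warnings above.
        conf.setLong("dfs.datanode.slow.io.warning.threshold.ms", 1000L);
        System.out.println(conf.getLong("dfs.datanode.slow.io.warning.threshold.ms", 300L));
      }
    }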
2018-07-02 07:34:18,971 INFO [PEWorker-14] procedure2.ProcedureExecutor(1266): Finished pid=2, ppid=1, state=SUCCESS; AssignProcedure table=hbase:meta, region=1588230740 in 3.1490sec 2018-07-02 07:34:19,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=51263] assignment.AssignmentManager(989): META REPORTED: rit=OPEN, location=asf911.gq1.ygridcore.net,38972,1530516853959, table=hbase:meta, region=1588230740 2018-07-02 07:34:19,089 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=51263] assignment.AssignmentManager(991): META REPORTED but no procedure found (complete?); set location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:19,114 INFO [PEWorker-15] procedure2.ProcedureExecutor(1266): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 3.9140sec 2018-07-02 07:34:19,194 INFO [RS-EventLoopGroup-6-32] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:55252, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:34:19,283 INFO [Thread-158] master.HMaster(962): Master startup: status=Wait for region servers to report in, state=RUNNING, startTime=1530516854053, completionTime=-1 2018-07-02 07:34:19,284 INFO [Thread-158] master.ServerManager(854): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2018-07-02 07:34:19,284 DEBUG [Thread-158] assignment.AssignmentManager(1197): Joining cluster... 2018-07-02 07:34:19,292 INFO [Thread-158] assignment.AssignmentManager(1208): Number of RegionServers=3 2018-07-02 07:34:19,293 INFO [Thread-158] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1530516919293 2018-07-02 07:34:19,294 INFO [Thread-158] assignment.AssignmentManager(1216): Joined the cluster in 9msec 2018-07-02 07:34:19,346 INFO [Thread-158] master.TableNamespaceManager(96): Namespace table not found. Creating... 
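The ServerManager line above shows master startup completing once the expected RegionServer count is reached (expected min=3, max=3). The whole sequence is driven by the test harness; a minimal sketch of a test that brings up the same cluster shape with HBaseTestingUtility, assuming the 2.x-era startMiniCluster(numMasters, numSlaves) overload, where the slave count also serves as the datanode count:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        try {
          // 2 masters and 3 regionservers (and 3 datanodes), matching
          // "Starting up minicluster with 2 master(s) and 3 regionserver(s)
          // and 3 datanode(s)" in this log.
          util.startMiniCluster(2, 3);
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }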
2018-07-02 07:34:19,353 INFO [Thread-158] master.HMaster(1886): Client=null/null create 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} 2018-07-02 07:34:19,538 DEBUG [Thread-158] procedure2.ProcedureExecutor(887): Stored pid=3, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2018-07-02 07:34:19,671 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741834_1010{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW]]} size 476 2018-07-02 07:34:19,673 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741834_1010 size 476 2018-07-02 07:34:19,673 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741834_1010 size 476 2018-07-02 07:34:20,086 DEBUG [PEWorker-3] util.FSTableDescriptors(683): Wrote into hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2018-07-02 07:34:20,091 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6931): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/.tmp Table name == hbase:namespace 2018-07-02 07:34:20,122 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0 2018-07-02 07:34:20,123 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], 
ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0 2018-07-02 07:34:20,123 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0 2018-07-02 07:34:20,125 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(829): Instantiated hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:20,126 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1527): Closing a2e46a0365d875b8253d213cfc9335b7, disabling compactions & flushes 2018-07-02 07:34:20,126 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1567): Updates disabled for region hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:20,126 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1681): Closed hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:20,279 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":1530516860209},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1530516860209}]},"ts":1530516860209} 2018-07-02 07:34:20,326 INFO [PEWorker-3] hbase.MetaTableAccessor(1528): Added 1 regions to meta. 2018-07-02 07:34:20,447 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516860442}]},"ts":1530516860442} 2018-07-02 07:34:20,454 INFO [PEWorker-3] hbase.MetaTableAccessor(1673): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2018-07-02 07:34:20,490 INFO [PEWorker-3] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823}] 2018-07-02 07:34:20,548 INFO [PEWorker-3] procedure.MasterProcedureScheduler(697): pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823 checking lock on a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,568 INFO [PEWorker-3] assignment.AssignProcedure(218): Starting pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823; rit=OFFLINE, location=asf911.gq1.ygridcore.net,46264,1530516853823; forceNewPlan=false, retain=false 2018-07-02 07:34:20,724 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 1 retained the pre-restart assignment. 
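The create 'hbase:namespace' request above prints the table descriptor in full (VERSIONS => '10', IN_MEMORY => 'true', BLOCKSIZE => '8192', BLOOMFILTER => 'ROW', and so on). A sketch of building an equivalent descriptor with the 2.x builder API; only a few of the printed attributes are set explicitly, the rest are left at their defaults:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(10)                 // VERSIONS => '10'
                .setInMemory(true)                  // IN_MEMORY => 'true'
                .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                .build())
            .build();
        System.out.println(desc);
      }
    }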
2018-07-02 07:34:20,726 INFO [PEWorker-5] assignment.RegionStateStore(199): pid=4 updating hbase:meta row=a2e46a0365d875b8253d213cfc9335b7, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:20,733 INFO [PEWorker-5] assignment.RegionTransitionProcedure(241): Dispatch pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823; rit=OPENING, location=asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:20,885 DEBUG [RSProcedureDispatcher-pool3-t2] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:20,896 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:50471, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-07-02 07:34:20,897 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=46264] regionserver.RSRpcServices(1983): Open hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:20,909 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => a2e46a0365d875b8253d213cfc9335b7, NAME => 'hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:20,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table namespace a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:20,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,920 DEBUG [StoreOpener-a2e46a0365d875b8253d213cfc9335b7-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/info 2018-07-02 07:34:20,920 DEBUG [StoreOpener-a2e46a0365d875b8253d213cfc9335b7-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/info 2018-07-02 07:34:20,922 INFO [StoreOpener-a2e46a0365d875b8253d213cfc9335b7-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:20,923 INFO [StoreOpener-a2e46a0365d875b8253d213cfc9335b7-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:20,924 INFO [StoreOpener-a2e46a0365d875b8253d213cfc9335b7-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:20,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,927 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,933 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,941 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-07-02 07:34:20,942 INFO [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened a2e46a0365d875b8253d213cfc9335b7; next sequenceid=2 2018-07-02 07:34:20,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for a2e46a0365d875b8253d213cfc9335b7 2018-07-02 07:34:20,948 INFO [PostOpenDeployTasks:a2e46a0365d875b8253d213cfc9335b7] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 
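Further below, CreateNamespaceProcedure runs for the built-in default and hbase namespaces (pid=5 and pid=6), which the master schedules itself during initialization. A client creates additional namespaces through the Admin API, which schedules the same procedure; a minimal sketch using a hypothetical namespace name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Schedules a CreateNamespaceProcedure on the active master,
          // like the pid=5/pid=6 entries in this log.
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
        }
      }
    }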
2018-07-02 07:34:20,954 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=51263] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=2, pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823; rit=OPENING, location=asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:20,955 DEBUG [PEWorker-6] assignment.RegionTransitionProcedure(354): Finishing pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823; rit=OPENING, location=asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:20,957 DEBUG [PostOpenDeployTasks:a2e46a0365d875b8253d213cfc9335b7] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:20,959 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. on asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:20,959 INFO [PEWorker-6] assignment.RegionStateStore(199): pid=4 updating hbase:meta row=a2e46a0365d875b8253d213cfc9335b7, regionState=OPEN, openSeqNum=2, regionLocation=asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:21,073 INFO [PEWorker-6] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=3, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace; resume parent processing. 2018-07-02 07:34:21,073 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=4, ppid=3, state=SUCCESS; AssignProcedure table=hbase:namespace, region=a2e46a0365d875b8253d213cfc9335b7, target=asf911.gq1.ygridcore.net,46264,1530516853823 in 480msec 2018-07-02 07:34:21,074 DEBUG [PEWorker-16] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516861073}]},"ts":1530516861073} 2018-07-02 07:34:21,081 INFO [PEWorker-16] hbase.MetaTableAccessor(1673): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2018-07-02 07:34:21,179 DEBUG [Thread-158] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/namespace 2018-07-02 07:34:21,224 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster1/namespace 2018-07-02 07:34:21,260 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:50473, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:34:21,274 INFO [PEWorker-16] procedure2.ProcedureExecutor(1266): Finished pid=3, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 1.8190sec 2018-07-02 07:34:21,440 DEBUG [Thread-158] procedure2.ProcedureExecutor(887): Stored pid=5, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2018-07-02 07:34:21,716 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/cluster1/namespace 2018-07-02 07:34:21,889 INFO [PEWorker-7] procedure2.ProcedureExecutor(1266): Finished pid=5, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 540msec 2018-07-02 07:34:22,081 DEBUG [Thread-158] procedure2.ProcedureExecutor(887): Stored pid=6, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2018-07-02 07:34:22,349 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/namespace 2018-07-02 07:34:22,402 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(135): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2018-07-02 07:34:22,403 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(139): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2018-07-02 07:34:22,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1266): Finished pid=6, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 524msec 2018-07-02 07:34:22,516 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/cluster1/namespace/default 2018-07-02 07:34:22,532 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/cluster1/namespace/hbase 2018-07-02 07:34:22,533 INFO [Thread-158] master.HMaster(1009): Master has completed initialization 8.447sec 2018-07-02 07:34:22,540 INFO [Thread-158] quotas.MasterQuotaManager(90): Quota support disabled 2018-07-02 07:34:22,540 INFO [Thread-158] zookeeper.ZKWatcher(205): not a secure deployment, proceeding 2018-07-02 07:34:22,556 DEBUG [Thread-158] master.HMaster(1067): Balancer post startup initialization complete, took 0 seconds 2018-07-02 07:34:22,620 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-07-02 07:34:22,620 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x15d8f6fc to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:22,621 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-07-02 07:34:22,634 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ea0a3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:22,678 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:55278, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:34:22,694 INFO [Time-limited test] hbase.HBaseTestingUtility(1044): Minicluster is up; activeMaster=asf911.gq1.ygridcore.net,51263,1530516853697 2018-07-02 07:34:22,694 INFO [Time-limited test] hbase.HBaseTestingUtility(953): Starting up minicluster with 2 master(s) and 3 regionserver(s) and 3 
datanode(s) 2018-07-02 07:34:22,694 INFO [Time-limited test] hbase.HBaseZKTestingUtility(85): Created new mini-cluster data directory: /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688, deleteOnExit=true 2018-07-02 07:34:22,694 INFO [Time-limited test] hbase.HBaseTestingUtility(968): STARTING DFS 2018-07-02 07:34:22,726 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting test.cache.data to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cache_data in system properties and HBase conf 2018-07-02 07:34:22,726 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting hadoop.tmp.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/hadoop_tmp in system properties and HBase conf 2018-07-02 07:34:22,727 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting hadoop.log.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/hadoop_logs in system properties and HBase conf 2018-07-02 07:34:22,727 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/mapred_local in system properties and HBase conf 2018-07-02 07:34:22,727 INFO [Time-limited test] hbase.HBaseTestingUtility(745): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/mapred_temp in system properties and HBase conf 2018-07-02 07:34:22,727 INFO [Time-limited test] hbase.HBaseTestingUtility(736): read short circuit is OFF 2018-07-02 07:34:22,727 DEBUG [Time-limited test] fs.HFileSystem(317): The file system is not a DistributedFileSystem. 
Skipping on block location reordering Formatting using clusterid: testClusterID 2018-07-02 07:34:22,969 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2018-07-02 07:34:22,976 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs to /tmp/Jetty_localhost_38794_hdfs____.68ja6h/webapp 2018-07-02 07:34:23,114 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38794 2018-07-02 07:34:23,379 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2018-07-02 07:34:23,387 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_46194_datanode____ytcjzl/webapp 2018-07-02 07:34:23,662 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:23,662 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:23,662 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:23,662 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:23,662 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:23,662 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:23,666 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46194 2018-07-02 07:34:23,735 WARN [Time-limited test] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2018-07-02 07:34:23,890 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2018-07-02 07:34:23,897 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_51439_datanode____.cz8nmj/webapp 2018-07-02 07:34:24,033 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:51439 2018-07-02 07:34:24,140 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2018-07-02 07:34:24,147 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/yetus-m2/hbase-flaky-tests/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_43026_datanode____d05tz0/webapp 2018-07-02 07:34:24,283 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43026 2018-07-02 07:34:24,355 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:42386] datanode.DirectoryScanner(430): 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000 2018-07-02 07:34:24,368 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c464750f2c: from storage DS-38565b32-54b2-419a-97c3-f65c173a0df3 node DatanodeRegistration(127.0.0.1:51748, datanodeUuid=cdb07aeb-994c-4550-947f-aefb5ff86d14, infoPort=54600, infoSecurePort=0, ipcPort=34583, storageInfo=lv=-56;cid=testClusterID;nsid=1259919280;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs 2018-07-02 07:34:24,368 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c464750f2c: from storage DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82 node DatanodeRegistration(127.0.0.1:51748, datanodeUuid=cdb07aeb-994c-4550-947f-aefb5ff86d14, infoPort=54600, infoSecurePort=0, ipcPort=34583, storageInfo=lv=-56;cid=testClusterID;nsid=1259919280;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs 2018-07-02 07:34:24,567 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:42386] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000 2018-07-02 07:34:24,579 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c47108e5a5: from storage DS-5924c3e7-0126-4318-ab71-97788504e4c7 node DatanodeRegistration(127.0.0.1:49540, datanodeUuid=1da93264-64ed-4e05-a498-052ff07ffff7, infoPort=50239, infoSecurePort=0, ipcPort=33404, storageInfo=lv=-56;cid=testClusterID;nsid=1259919280;c=0), blocks: 0, hasStaleStorage: true, processing time: 2 msecs 2018-07-02 07:34:24,579 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c47108e5a5: from storage DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8 node DatanodeRegistration(127.0.0.1:49540, datanodeUuid=1da93264-64ed-4e05-a498-052ff07ffff7, infoPort=50239, infoSecurePort=0, ipcPort=33404, storageInfo=lv=-56;cid=testClusterID;nsid=1259919280;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs 2018-07-02 07:34:24,828 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:42386] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. 
Assuming default value of 1000 2018-07-02 07:34:24,839 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c4809071ac: from storage DS-c02e3dde-4ee5-4268-849e-c97455f318a6 node DatanodeRegistration(127.0.0.1:38320, datanodeUuid=fa83b393-e169-412c-916b-f583c57843d3, infoPort=43394, infoSecurePort=0, ipcPort=45159, storageInfo=lv=-56;cid=testClusterID;nsid=1259919280;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs 2018-07-02 07:34:24,839 INFO [Block report processor] blockmanagement.BlockManager(1933): BLOCK* processReport 0x1de6c4809071ac: from storage DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad node DatanodeRegistration(127.0.0.1:38320, datanodeUuid=fa83b393-e169-412c-916b-f583c57843d3, infoPort=43394, infoSecurePort=0, ipcPort=45159, storageInfo=lv=-56;cid=testClusterID;nsid=1259919280;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs 2018-07-02 07:34:24,843 DEBUG [Time-limited test] hbase.HBaseTestingUtility(671): Setting hbase.rootdir to /home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1 2018-07-02 07:34:24,845 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:24,846 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:24,889 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:34:24,890 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:24,891 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:24,896 INFO [Time-limited test] util.FSUtils(515): Created version file at hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950 with version=8 2018-07-02 07:34:24,900 INFO [Time-limited test] hbase.HBaseTestingUtility(1212): Setting hbase.fs.tmp.dir to 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/hbase-staging 2018-07-02 07:34:24,903 INFO [Time-limited test] client.ConnectionUtils(122): master/asf911:0 server-side Connection retries=45 2018-07-02 07:34:24,903 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5 2018-07-02 07:34:24,903 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6 2018-07-02 07:34:24,904 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2018-07-02 07:34:24,904 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.ClientService, hbase.pb.AdminService 2018-07-02 07:34:24,907 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:44014 2018-07-02 07:34:24,908 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:24,908 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:24,909 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:24,911 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:24,912 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=master:44014 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:24,924 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:440140x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:24,927 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): master:44014-0x16459e9b450000b connected 2018-07-02 07:34:24,991 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/master 2018-07-02 07:34:24,992 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, 
baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/running 2018-07-02 07:34:24,993 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44014 2018-07-02 07:34:24,995 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=44014 2018-07-02 07:34:24,996 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44014 2018-07-02 07:34:24,996 INFO [Time-limited test] master.HMaster(495): hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950, hbase.cluster.distributed=false 2018-07-02 07:34:24,999 INFO [Time-limited test] client.ConnectionUtils(122): master/asf911:0 server-side Connection retries=45 2018-07-02 07:34:24,999 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5 2018-07-02 07:34:25,000 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6 2018-07-02 07:34:25,000 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2018-07-02 07:34:25,000 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.ClientService, hbase.pb.AdminService 2018-07-02 07:34:25,003 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:54338 2018-07-02 07:34:25,005 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,005 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,006 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,007 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,008 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=master:54338 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:25,016 
DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:543380x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:25,017 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): master:54338-0x16459e9b450000c connected 2018-07-02 07:34:25,027 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/master 2018-07-02 07:34:25,028 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/running 2018-07-02 07:34:25,029 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=54338 2018-07-02 07:34:25,030 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=54338 2018-07-02 07:34:25,031 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=54338 2018-07-02 07:34:25,031 INFO [Time-limited test] master.HMaster(495): hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950, hbase.cluster.distributed=false 2018-07-02 07:34:25,058 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45 2018-07-02 07:34:25,058 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5 2018-07-02 07:34:25,058 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6 2018-07-02 07:34:25,058 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2018-07-02 07:34:25,059 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2018-07-02 07:34:25,059 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B 2018-07-02 07:34:25,062 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:43014 2018-07-02 07:34:25,064 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,064 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,066 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,067 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,069 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:43014 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:25,083 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:430140x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:25,084 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:43014-0x16459e9b450000d connected 2018-07-02 07:34:25,084 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/master 2018-07-02 07:34:25,085 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/running 2018-07-02 07:34:25,086 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43014 2018-07-02 07:34:25,087 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=43014 2018-07-02 07:34:25,088 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43014 2018-07-02 07:34:25,114 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45 2018-07-02 07:34:25,114 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5 2018-07-02 07:34:25,115 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6 2018-07-02 07:34:25,115 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2018-07-02 07:34:25,115 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2018-07-02 07:34:25,115 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B 2018-07-02 07:34:25,118 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:33727 2018-07-02 07:34:25,119 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,119 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,121 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,122 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,124 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:33727 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:25,132 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:337270x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:25,134 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:33727-0x16459e9b450000e connected 2018-07-02 07:34:25,134 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/master 2018-07-02 07:34:25,135 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/running 2018-07-02 07:34:25,136 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33727 2018-07-02 07:34:25,138 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=33727 2018-07-02 07:34:25,138 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33727 2018-07-02 07:34:25,164 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45 2018-07-02 07:34:25,164 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5 2018-07-02 07:34:25,165 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6 2018-07-02 07:34:25,165 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2018-07-02 07:34:25,165 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2018-07-02 07:34:25,165 INFO 
[Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B 2018-07-02 07:34:25,169 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.155:38428 2018-07-02 07:34:25,170 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,170 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,172 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,174 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,176 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:38428 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:25,211 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:384280x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:25,213 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:38428-0x16459e9b450000f connected 2018-07-02 07:34:25,213 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/master 2018-07-02 07:34:25,214 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/running 2018-07-02 07:34:25,216 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38428 2018-07-02 07:34:25,217 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=38428 2018-07-02 07:34:25,218 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38428 2018-07-02 07:34:25,221 INFO [Thread-409] master.HMaster(2108): Adding backup master ZNode /cluster2/backup-masters/asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:25,223 INFO [Thread-410] master.HMaster(2108): Adding backup master ZNode /cluster2/backup-masters/asf911.gq1.ygridcore.net,54338,1530516864997 2018-07-02 07:34:25,233 DEBUG [Thread-409] zookeeper.ZKUtil(355): master:44014-0x16459e9b450000b, quorum=localhost:59178, 
baseZNode=/cluster2 Set watcher on existing znode=/cluster2/backup-masters/asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:25,233 DEBUG [Thread-410] zookeeper.ZKUtil(355): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/backup-masters/asf911.gq1.ygridcore.net,54338,1530516864997 2018-07-02 07:34:25,241 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/master 2018-07-02 07:34:25,241 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/master 2018-07-02 07:34:25,241 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/master 2018-07-02 07:34:25,241 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/master 2018-07-02 07:34:25,241 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/master 2018-07-02 07:34:25,251 DEBUG [Thread-409] zookeeper.ZKUtil(355): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master 2018-07-02 07:34:25,252 DEBUG [Thread-410] zookeeper.ZKUtil(355): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master 2018-07-02 07:34:25,254 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master 2018-07-02 07:34:25,254 INFO [Thread-409] master.ActiveMasterManager(172): Deleting ZNode for /cluster2/backup-masters/asf911.gq1.ygridcore.net,44014,1530516864901 from backup master directory 2018-07-02 07:34:25,254 INFO [Thread-410] master.ActiveMasterManager(218): Another master is the active master, asf911.gq1.ygridcore.net,44014,1530516864901; waiting to become the next active master 2018-07-02 07:34:25,254 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master 2018-07-02 07:34:25,265 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/backup-masters/asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:25,266 WARN [Thread-409] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
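The entries above show the two masters racing for /cluster2/master: each first registers an ephemeral znode under /cluster2/backup-masters, then tries to create /cluster2/master; the winner (44014) deletes its backup-masters entry, while the loser (54338) keeps a watch on /cluster2/master and waits to become the next active master. Below is a minimal sketch of this ephemeral-znode election pattern using the plain Apache ZooKeeper client; it is illustrative only (the class name is invented, and HBase's real ActiveMasterManager does considerably more):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative election sketch; not HBase's ActiveMasterManager.
    public class MasterElectionSketch implements Watcher {
        private static final String MASTER_ZNODE = "/cluster2/master"; // znode seen in the log
        private final ZooKeeper zk;
        private final byte[] serverName;

        public MasterElectionSketch(ZooKeeper zk, byte[] serverName) {
            this.zk = zk;
            this.serverName = serverName;
        }

        /** Try to create the ephemeral master znode; the first creator wins. */
        public boolean tryBecomeActive() throws KeeperException, InterruptedException {
            try {
                // EPHEMERAL: the znode vanishes when this session dies,
                // which is what lets a backup master take over.
                zk.create(MASTER_ZNODE, serverName,
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return true;  // we are the active master
            } catch (KeeperException.NodeExistsException e) {
                zk.exists(MASTER_ZNODE, this); // watch for NodeDeleted, then retry
                return false;                  // another master is active; wait our turn
            }
        }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeDeleted) {
                try {
                    tryBecomeActive(); // active master went away; race again
                } catch (KeeperException | InterruptedException ignored) {
                }
            }
        }
    }

The "Registered as active master" line that follows is the winner's side of exactly this exchange.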
2018-07-02 07:34:25,266 INFO [Thread-409] master.ActiveMasterManager(181): Registered as active master=asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:25,302 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:25,303 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:25,304 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741826_1002 size 42 2018-07-02 07:34:25,307 DEBUG [Thread-409] util.FSUtils(667): Created cluster ID file at hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/hbase.id with ID: 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:25,316 INFO [Thread-409] master.MasterFileSystem(393): BOOTSTRAP: creating hbase:meta region 2018-07-02 07:34:25,317 INFO [Thread-409] regionserver.HRegion(6931): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'false', BLOCKSIZE => '8192'}, {NAME => 'rep_barrier', VERSIONS => '2147483647', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'table', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950 Table name == hbase:meta 2018-07-02 
07:34:25,341 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0 2018-07-02 07:34:25,341 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0 2018-07-02 07:34:25,341 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:25,343 DEBUG [Thread-409] regionserver.HRegion(829): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10; minColumnNum=100; preparePutThreadLimit=20; hotProtect now enabled 2018-07-02 07:34:25,351 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info 2018-07-02 07:34:25,352 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,353 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:25,354 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:25,358 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for
path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:34:25,358 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,359 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:25,360 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:25,364 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table 2018-07-02 07:34:25,365 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:25,365 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:25,367 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:25,367 DEBUG [Thread-409] regionserver.HRegion(925): replaying wal for 1588230740 2018-07-02 07:34:25,371 DEBUG [Thread-409] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740 2018-07-02 07:34:25,371 DEBUG [Thread-409] regionserver.HRegion(933): stopping wal replay for 1588230740 2018-07-02 07:34:25,371 DEBUG [Thread-409] regionserver.HRegion(945): Cleaning up 
temporary data for 1588230740 2018-07-02 07:34:25,373 DEBUG [Thread-409] regionserver.HRegion(956): Cleaning up detritus for 1588230740 2018-07-02 07:34:25,375 DEBUG [Thread-409] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor; using region.getMemStoreFlushHeapSize/# of families (42.7M) instead. 2018-07-02 07:34:25,376 DEBUG [Thread-409] regionserver.HRegion(978): writing seq id for 1588230740 2018-07-02 07:34:25,382 DEBUG [Thread-409] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-07-02 07:34:25,382 INFO [Thread-409] regionserver.HRegion(982): Opened 1588230740; next sequenceid=2 2018-07-02 07:34:25,382 DEBUG [Thread-409] regionserver.HRegion(1527): Closing 1588230740, disabling compactions & flushes 2018-07-02 07:34:25,383 DEBUG [Thread-409] regionserver.HRegion(1567): Updates disabled for region hbase:meta,,1.1588230740 2018-07-02 07:34:25,383 INFO [Thread-409] regionserver.HRegion(1681): Closed hbase:meta,,1.1588230740 2018-07-02 07:34:25,411 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:25,412 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:25,412 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0 2018-07-02 07:34:25,416 DEBUG [Thread-409] util.FSTableDescriptors(683): Wrote into hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2018-07-02 07:34:25,446 INFO [Thread-409] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:25,447 INFO [Thread-409] coordination.ZKSplitLogManagerCoordination(494): Found 0 orphan tasks and 0 rescan nodes 2018-07-02 07:34:25,466 INFO [Thread-409] zookeeper.ReadOnlyZKClient(139): Connect 0x247f9686 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:25,475 DEBUG [Thread-409]
ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32b6f90c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:25,479 INFO [Thread-409] procedure2.ProcedureExecutor(528): Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker count=160 2018-07-02 07:34:25,482 INFO [Thread-409] wal.WALProcedureStore(1077): Rolled new Procedure Store WAL, id=1 2018-07-02 07:34:25,484 INFO [Thread-409] procedure2.ProcedureExecutor(547): Recovered WALProcedureStore lease in 4msec 2018-07-02 07:34:25,484 INFO [Thread-409] procedure2.ProcedureExecutor(561): Loaded WALProcedureStore in 0msec 2018-07-02 07:34:25,484 INFO [Thread-409] procedure2.RemoteProcedureDispatcher(97): Instantiated, coreThreads=128 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2018-07-02 07:34:25,485 DEBUG [Thread-409] zookeeper.ZKUtil(614): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Unable to get data of znode /cluster2/meta-region-server because node does not exist (not an error) 2018-07-02 07:34:25,516 INFO [Thread-409] balancer.BaseLoadBalancer(1039): slop=0.001, tablesOnMaster=false, systemTablesOnMaster=false 2018-07-02 07:34:25,516 INFO [Thread-409] balancer.StochasticLoadBalancer(216): Loaded config; maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, etc. 2018-07-02 07:34:25,535 DEBUG [Thread-409] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/balancer 2018-07-02 07:34:25,536 DEBUG [Thread-409] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/normalizer 2018-07-02 07:34:25,541 DEBUG [Thread-409] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/switch/split 2018-07-02 07:34:25,542 DEBUG [Thread-409] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/switch/merge 2018-07-02 07:34:25,549 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/running 2018-07-02 07:34:25,551 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/running 2018-07-02 07:34:25,549 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/running 2018-07-02 07:34:25,549 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/running 2018-07-02 07:34:25,551 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:54338-0x16459e9b450000c, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper 
Event, type=NodeCreated, state=SyncConnected, path=/cluster2/running 2018-07-02 07:34:25,551 INFO [Thread-409] master.HMaster(787): Active/primary master=asf911.gq1.ygridcore.net,44014,1530516864901, sessionid=0x16459e9b450000b, setting cluster-up flag (Was=false) 2018-07-02 07:34:25,566 INFO [M:1;asf911:54338] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:25,599 DEBUG [Thread-409] procedure.ZKProcedureUtil(272): Clearing all znodes /cluster2/flush-table-proc/acquired, /cluster2/flush-table-proc/reached, /cluster2/flush-table-proc/abort 2018-07-02 07:34:25,601 DEBUG [Thread-409] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:25,649 DEBUG [Thread-409] procedure.ZKProcedureUtil(272): Clearing all znodes /cluster2/online-snapshot/acquired, /cluster2/online-snapshot/reached, /cluster2/online-snapshot/abort 2018-07-02 07:34:25,651 DEBUG [Thread-409] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:25,654 INFO [Thread-409] master.ServerManager(1104): No .lastflushedseqids found at hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/.lastflushedseqids; will record last flushed sequence id for regions by regionserver report all over again 2018-07-02 07:34:25,727 INFO [RS:2;asf911:38428] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:25,727 INFO [RS:1;asf911:33727] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:25,727 INFO [RS:0;asf911:43014] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:25,730 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:25,728 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:25,731 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:25,750 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:25,750 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:25,750 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:25,750 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:25,750 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:25,753 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:25,766 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:25,766 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:25,767 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:25,771 INFO [RS:2;asf911:38428] zookeeper.ReadOnlyZKClient(139): Connect
0x0caccc47 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:25,771 INFO [RS:1;asf911:33727] zookeeper.ReadOnlyZKClient(139): Connect 0x40eff960 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:25,775 INFO [RS:0;asf911:43014] zookeeper.ReadOnlyZKClient(139): Connect 0x703fa212 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:25,825 DEBUG [RS:1;asf911:33727] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e280f0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:25,825 DEBUG [RS:0;asf911:43014] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ef84265, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:25,825 DEBUG [RS:1;asf911:33727] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a179210, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:25,825 DEBUG [RS:2;asf911:38428] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26b23a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:25,826 DEBUG [RS:0;asf911:43014] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ac7c962, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:25,826 DEBUG [RS:2;asf911:38428] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@654a737, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:25,826 DEBUG [RS:0;asf911:43014] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:0;asf911:43014 2018-07-02 07:34:25,826 DEBUG [RS:1;asf911:33727] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:1;asf911:33727 2018-07-02 07:34:25,826 INFO [RS:0;asf911:43014] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled 2018-07-02 07:34:25,826 INFO [RS:0;asf911:43014] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled 2018-07-02 07:34:25,826 DEBUG [RS:2;asf911:38428] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:2;asf911:38428 2018-07-02 07:34:25,827 INFO [RS:2;asf911:38428] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled 2018-07-02 07:34:25,827 INFO [RS:2;asf911:38428] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled 2018-07-02 07:34:25,826 INFO 
[RS:1;asf911:33727] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled 2018-07-02 07:34:25,827 INFO [RS:1;asf911:33727] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled 2018-07-02 07:34:25,827 INFO [RS:0;asf911:43014] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=43014, startcode=1530516865056 2018-07-02 07:34:25,828 INFO [RS:1;asf911:33727] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=33727, startcode=1530516865112 2018-07-02 07:34:25,828 INFO [RS:2;asf911:38428] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=38428, startcode=1530516865163 2018-07-02 07:34:25,854 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:33979, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2018-07-02 07:34:25,857 INFO [RS-EventLoopGroup-9-4] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:42911, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2018-07-02 07:34:25,857 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:54358, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2018-07-02 07:34:25,858 DEBUG [RS:0;asf911:43014] regionserver.HRegionServer(2625): Master is not running yet 2018-07-02 07:34:25,858 WARN [RS:0;asf911:43014] regionserver.HRegionServer(950): reportForDuty failed; sleeping and then retrying. 2018-07-02 07:34:25,858 DEBUG [RS:2;asf911:38428] regionserver.HRegionServer(2625): Master is not running yet 2018-07-02 07:34:25,858 WARN [RS:2;asf911:38428] regionserver.HRegionServer(950): reportForDuty failed; sleeping and then retrying. 2018-07-02 07:34:25,858 DEBUG [RS:1;asf911:33727] regionserver.HRegionServer(2625): Master is not running yet 2018-07-02 07:34:25,859 WARN [RS:1;asf911:33727] regionserver.HRegionServer(950): reportForDuty failed; sleeping and then retrying. 
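All three regionservers call reportForDuty before the master has finished initializing, get told "Master is not running yet", and back off before retrying; the log shows them re-registering about a second later (07:34:26,860). A small self-contained sketch of that sleep-and-retry pattern follows; reportForDuty() here is a stand-in placeholder, not HRegionServer's real method, and the 1 s sleep is inferred from the timestamps above:

    import java.io.IOException;

    // Illustrative sketch of the "reportForDuty failed; sleeping and then
    // retrying." loop above; names and timings are assumptions, not HBase code.
    public class ReportForDutySketch {
        private static final long RETRY_SLEEP_MS = 1000; // log shows ~1s between attempts
        private static int attempts = 0;

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                try {
                    reportForDuty(); // RPC to the master's RegionServerStatusService
                    System.out.println("registered with master");
                    return;
                } catch (IOException e) {
                    // Matches the WARN in the log: back off, then try again.
                    System.err.println("reportForDuty failed; sleeping and then retrying.");
                    Thread.sleep(RETRY_SLEEP_MS);
                }
            }
        }

        private static void reportForDuty() throws IOException {
            if (attempts++ == 0) {
                throw new IOException("Master is not running yet"); // placeholder failure
            }
        }
    }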
2018-07-02 07:34:25,862 DEBUG [Thread-409] procedure2.ProcedureExecutor(887): Stored pid=1, state=RUNNABLE:INIT_META_ASSIGN_META; InitMetaProcedure table=hbase:meta 2018-07-02 07:34:25,864 DEBUG [Thread-409] executor.ExecutorService(92): Starting executor service name=MASTER_OPEN_REGION-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:25,865 DEBUG [Thread-409] executor.ExecutorService(92): Starting executor service name=MASTER_CLOSE_REGION-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:25,865 DEBUG [Thread-409] executor.ExecutorService(92): Starting executor service name=MASTER_SERVER_OPERATIONS-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:25,865 DEBUG [Thread-409] executor.ExecutorService(92): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/asf911:0, corePoolSize=5, maxPoolSize=5 2018-07-02 07:34:25,865 DEBUG [Thread-409] executor.ExecutorService(92): Starting executor service name=M_LOG_REPLAY_OPS-master/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:25,865 DEBUG [Thread-409] executor.ExecutorService(92): Starting executor service name=MASTER_TABLE_OPERATIONS-master/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:25,869 INFO [Thread-409] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.ProcedureExecutor$CompletedProcedureCleaner; timeout=30000, timestamp=1530516895869 2018-07-02 07:34:25,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}] 2018-07-02 07:34:25,870 DEBUG [Thread-409] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2018-07-02 07:34:25,870 INFO [Thread-409] zookeeper.RecoverableZooKeeper(106): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:25,871 DEBUG [Thread-409] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2018-07-02 07:34:25,871 DEBUG [Thread-409] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2018-07-02 07:34:25,871 INFO [Thread-409] cleaner.LogCleaner(122): Creating OldWALs cleaners with size=2 2018-07-02 07:34:25,874 DEBUG [Thread-409] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2018-07-02 07:34:25,875 DEBUG [Thread-409] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2018-07-02 07:34:25,875 DEBUG [Thread-409] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2018-07-02 07:34:25,875 DEBUG [Thread-409] cleaner.HFileCleaner(207): Starting for large file=Thread[Thread-409-HFileCleaner.large.0-1530516865875,5,FailOnTimeoutGroup] 2018-07-02 07:34:25,875 DEBUG [Thread-409] cleaner.HFileCleaner(222): Starting for small files=Thread[Thread-409-HFileCleaner.small.0-1530516865875,5,FailOnTimeoutGroup] 2018-07-02 07:34:25,907 DEBUG [Thread-409-EventThread] zookeeper.ZKWatcher(478): replicationLogCleaner0x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:25,908 DEBUG [Thread-409-EventThread] zookeeper.ZKWatcher(543): replicationLogCleaner-0x16459e9b4500014 connected 2018-07-02 07:34:25,953 
INFO [PEWorker-1] procedure.MasterProcedureScheduler(697): pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 checking lock on 1588230740 2018-07-02 07:34:25,954 INFO [PEWorker-1] assignment.AssignProcedure(218): Starting pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; forceNewPlan=false, retain=false 2018-07-02 07:34:26,104 WARN [master/asf911:0] assignment.AssignmentManager(1669): No servers available; cannot place 1 unassigned regions. 2018-07-02 07:34:26,860 INFO [RS:0;asf911:43014] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=43014, startcode=1530516865056 2018-07-02 07:34:26,860 INFO [RS:1;asf911:33727] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=33727, startcode=1530516865112 2018-07-02 07:34:26,860 INFO [RS:2;asf911:38428] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=38428, startcode=1530516865163 2018-07-02 07:34:26,866 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=44014] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:26,866 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:26,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:26,867 DEBUG [RS:1;asf911:33727] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950 2018-07-02 07:34:26,867 DEBUG [RS:0;asf911:43014] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950 2018-07-02 07:34:26,869 DEBUG [RS:2;asf911:38428] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950 2018-07-02 07:34:26,869 DEBUG [RS:1;asf911:33727] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:42386 2018-07-02 07:34:26,872 DEBUG [RS:1;asf911:33727] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1 2018-07-02 07:34:26,870 DEBUG [RS:0;asf911:43014] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:42386 2018-07-02 07:34:26,872 DEBUG [RS:2;asf911:38428] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:42386 2018-07-02 07:34:26,872 DEBUG [RS:0;asf911:43014] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1 2018-07-02 07:34:26,872 DEBUG [RS:2;asf911:38428] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1 2018-07-02 07:34:26,909 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:34:26,933 DEBUG [RS:1;asf911:33727] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set 
watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:26,933 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,43014,1530516865056] 2018-07-02 07:34:26,933 WARN [RS:1;asf911:33727] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2018-07-02 07:34:26,933 DEBUG [RS:0;asf911:43014] zookeeper.ZKUtil(355): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:26,933 DEBUG [RS:2;asf911:38428] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:26,933 WARN [RS:2;asf911:38428] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2018-07-02 07:34:26,933 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,33727,1530516865112] 2018-07-02 07:34:26,933 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,38428,1530516865163] 2018-07-02 07:34:26,933 INFO [RS:2;asf911:38428] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:26,933 WARN [RS:0;asf911:43014] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
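The wal.WALFactory lines here and just below show each regionserver instantiating AsyncFSWALProvider, the async WAL implementation in this 3.0.0-SNAPSHOT build. The provider is selected through configuration; a hedged sketch of how a test might pin it explicitly (the class name is invented, but "hbase.wal.provider" and "hbase.wal.meta_provider" are the stock keys, with "asyncfs" and "filesystem" among the accepted values):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch: choosing the WAL implementation before starting a (mini)cluster.
    public class WalProviderSketch {
        public static Configuration confForAsyncWal() {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.wal.provider", "asyncfs");      // WAL for user regions
            conf.set("hbase.wal.meta_provider", "asyncfs"); // WAL for hbase:meta
            return conf;
        }
    }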
2018-07-02 07:34:26,934 INFO [RS:0;asf911:43014] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:26,933 INFO [RS:1;asf911:33727] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:26,934 DEBUG [RS:0;asf911:43014] regionserver.HRegionServer(1815): logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:26,934 DEBUG [RS:1;asf911:33727] regionserver.HRegionServer(1815): logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:26,934 DEBUG [RS:2;asf911:38428] regionserver.HRegionServer(1815): logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:26,963 DEBUG [RS:2;asf911:38428] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:26,964 DEBUG [RS:0;asf911:43014] zookeeper.ZKUtil(355): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:26,964 DEBUG [RS:2;asf911:38428] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:26,964 DEBUG [RS:0;asf911:43014] zookeeper.ZKUtil(355): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:26,964 DEBUG [RS:1;asf911:33727] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:26,965 DEBUG [RS:2;asf911:38428] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:26,965 DEBUG [RS:0;asf911:43014] zookeeper.ZKUtil(355): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:26,966 DEBUG [RS:1;asf911:33727] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:26,966 DEBUG [RS:1;asf911:33727] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:26,966 DEBUG [RS:2;asf911:38428] regionserver.Replication(144): Replication stats-in-log period=5 seconds 2018-07-02 07:34:26,966 DEBUG [RS:0;asf911:43014] regionserver.Replication(144): Replication stats-in-log period=5 seconds 2018-07-02 07:34:26,967 INFO [RS:0;asf911:43014] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 
5000 milliseconds 2018-07-02 07:34:26,967 INFO [RS:2;asf911:38428] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds 2018-07-02 07:34:26,968 DEBUG [RS:1;asf911:33727] regionserver.Replication(144): Replication stats-in-log period=5 seconds 2018-07-02 07:34:26,968 INFO [RS:1;asf911:33727] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds 2018-07-02 07:34:26,975 INFO [RS:2;asf911:38428] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-07-02 07:34:26,975 INFO [RS:0;asf911:43014] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-07-02 07:34:26,975 INFO [RS:1;asf911:33727] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-07-02 07:34:26,977 INFO [RS:0;asf911:43014] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-07-02 07:34:26,977 INFO [RS:1;asf911:33727] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-07-02 07:34:26,977 INFO [RS:2;asf911:38428] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-07-02 07:34:26,977 INFO [RS:0;asf911:43014] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S 2018-07-02 07:34:26,977 INFO [RS:1;asf911:33727] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S 2018-07-02 07:34:26,978 INFO [RS:2;asf911:38428] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S 2018-07-02 07:34:26,988 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,988 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,988 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,988 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,988 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,989 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,989 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,989 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 
07:34:26,989 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,989 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,989 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,989 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,990 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,990 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,990 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:26,990 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:26,990 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,990 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:26,990 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:26,990 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:26,991 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10 2018-07-02 07:34:26,991 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,991 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,991 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3 2018-07-02 07:34:26,991 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:26,991 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:26,991 DEBUG [RS:1;asf911:33727] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,991 DEBUG 
[RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2 2018-07-02 07:34:26,992 DEBUG [RS:2;asf911:38428] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:26,992 DEBUG [RS:0;asf911:43014] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1 2018-07-02 07:34:27,017 INFO [SplitLogWorker-asf911:38428] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,38428,1530516865163 starting 2018-07-02 07:34:27,017 INFO [RS:2;asf911:38428] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-07-02 07:34:27,019 INFO [RS:0;asf911:43014] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-07-02 07:34:27,019 INFO [SplitLogWorker-asf911:43014] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,43014,1530516865056 starting 2018-07-02 07:34:27,022 INFO [RS:1;asf911:33727] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-07-02 07:34:27,022 INFO [SplitLogWorker-asf911:33727] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,33727,1530516865112 starting 2018-07-02 07:34:27,040 INFO [RS:2;asf911:38428] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,38428,1530516865163, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:38428, sessionid=0x16459e9b450000f 2018-07-02 07:34:27,041 INFO [RS:0;asf911:43014] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,43014,1530516865056, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:43014, sessionid=0x16459e9b450000d 2018-07-02 07:34:27,041 INFO [RS:1;asf911:33727] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,33727,1530516865112, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:33727, sessionid=0x16459e9b450000e 2018-07-02 07:34:27,041 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-07-02 07:34:27,041 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-07-02 07:34:27,041 DEBUG [RS:0;asf911:43014] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:27,041 DEBUG [RS:0;asf911:43014] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,43014,1530516865056' 2018-07-02 07:34:27,041 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-07-02 07:34:27,041 DEBUG [RS:0;asf911:43014] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/flush-table-proc/abort' 2018-07-02 07:34:27,041 DEBUG [RS:2;asf911:38428] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:27,042 DEBUG [RS:2;asf911:38428] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,38428,1530516865163' 2018-07-02 07:34:27,042 DEBUG [RS:2;asf911:38428] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/flush-table-proc/abort' 2018-07-02 07:34:27,041 DEBUG [RS:1;asf911:33727] 
flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:27,042 DEBUG [RS:1;asf911:33727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,33727,1530516865112' 2018-07-02 07:34:27,042 DEBUG [RS:1;asf911:33727] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/flush-table-proc/abort' 2018-07-02 07:34:27,042 DEBUG [RS:0;asf911:43014] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/flush-table-proc/acquired' 2018-07-02 07:34:27,042 DEBUG [RS:2;asf911:38428] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/flush-table-proc/acquired' 2018-07-02 07:34:27,045 DEBUG [RS:1;asf911:33727] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/flush-table-proc/acquired' 2018-07-02 07:34:27,045 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-07-02 07:34:27,045 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-07-02 07:34:27,045 DEBUG [RS:0;asf911:43014] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:27,045 DEBUG [RS:0;asf911:43014] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,43014,1530516865056' 2018-07-02 07:34:27,045 DEBUG [RS:0;asf911:43014] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/online-snapshot/abort' 2018-07-02 07:34:27,045 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-07-02 07:34:27,045 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-07-02 07:34:27,046 DEBUG [RS:1;asf911:33727] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:27,046 DEBUG [RS:1;asf911:33727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,33727,1530516865112' 2018-07-02 07:34:27,046 DEBUG [RS:1;asf911:33727] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/online-snapshot/abort' 2018-07-02 07:34:27,045 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-07-02 07:34:27,046 DEBUG [RS:0;asf911:43014] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/online-snapshot/acquired' 2018-07-02 07:34:27,046 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-07-02 07:34:27,046 DEBUG [RS:2;asf911:38428] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:27,046 DEBUG [RS:2;asf911:38428] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,38428,1530516865163' 2018-07-02 07:34:27,047 DEBUG [RS:2;asf911:38428] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/online-snapshot/abort' 2018-07-02 07:34:27,046 DEBUG [RS:1;asf911:33727] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/online-snapshot/acquired' 2018-07-02 
07:34:27,047 DEBUG [RS:0;asf911:43014] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started 2018-07-02 07:34:27,047 INFO [RS:0;asf911:43014] quotas.RegionServerRpcQuotaManager(62): Quota support disabled 2018-07-02 07:34:27,047 INFO [RS:0;asf911:43014] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager. 2018-07-02 07:34:27,047 DEBUG [RS:1;asf911:33727] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started 2018-07-02 07:34:27,047 INFO [RS:1;asf911:33727] quotas.RegionServerRpcQuotaManager(62): Quota support disabled 2018-07-02 07:34:27,047 INFO [RS:1;asf911:33727] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager. 2018-07-02 07:34:27,047 DEBUG [RS:2;asf911:38428] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/online-snapshot/acquired' 2018-07-02 07:34:27,048 DEBUG [RS:2;asf911:38428] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started 2018-07-02 07:34:27,048 INFO [RS:2;asf911:38428] quotas.RegionServerRpcQuotaManager(62): Quota support disabled 2018-07-02 07:34:27,048 INFO [RS:2;asf911:38428] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager. 2018-07-02 07:34:27,105 DEBUG [master/asf911:0] assignment.AssignmentManager(1690): Processing assignQueue; systemServersCount=3, allServersCount=3 2018-07-02 07:34:27,112 INFO [PEWorker-3] assignment.AssignProcedure(246): Early suspend! pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,052 WARN [RS:0;asf911:43014] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:34:28,053 INFO [RS:0;asf911:43014] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C43014%2C1530516865056, suffix=, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:34:28,053 WARN [RS:1;asf911:33727] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:34:28,053 INFO [RS:1;asf911:33727] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C33727%2C1530516865112, suffix=, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:34:28,054 WARN [RS:2;asf911:38428] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 
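The AbstractFSWAL entries above report a WAL configuration of blocksize=256 MB and rollsize=128 MB on each region server, plus a warning that 'hbase.regionserver.maxlogs' is deprecated. As a minimal sketch, assuming the stock HBase configuration keys, those two figures would typically come from settings like the following (a 0.5 roll multiplier over a 256 MB block yields the 128 MB roll size):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalSizingSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // HDFS block size used for WAL files; matches "blocksize=256 MB" above.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        // Roll threshold is blocksize * multiplier; 0.5 gives "rollsize=128 MB".
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
      }
    }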
2018-07-02 07:34:28,054 INFO [RS:2;asf911:38428] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C38428%2C1530516865163, suffix=, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:34:28,080 DEBUG [RS-EventLoopGroup-13-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK] 2018-07-02 07:34:28,083 DEBUG [RS-EventLoopGroup-13-7] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:34:28,083 DEBUG [RS-EventLoopGroup-13-6] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:34:28,095 INFO [RS:0;asf911:43014] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 2018-07-02 07:34:28,099 DEBUG [RS:0;asf911:43014] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK]] 2018-07-02 07:34:28,120 DEBUG [RS-EventLoopGroup-13-11] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:28,120 DEBUG [RS-EventLoopGroup-13-14] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK] 2018-07-02 07:34:28,120 DEBUG [RS-EventLoopGroup-13-10] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK] 2018-07-02 07:34:28,120 DEBUG [RS-EventLoopGroup-13-13] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK] 2018-07-02 07:34:28,135 DEBUG [RS-EventLoopGroup-13-16] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:28,143 DEBUG [RS-EventLoopGroup-13-12] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:34:28,154 INFO [RS:2;asf911:38428] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 2018-07-02 07:34:28,154 INFO [RS:1;asf911:33727] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 2018-07-02 07:34:28,155 DEBUG [RS:2;asf911:38428] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK]] 2018-07-02 07:34:28,155 DEBUG [RS:1;asf911:33727] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK]] 2018-07-02 07:34:28,160 INFO [PEWorker-4] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,195 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(466): META region location doesn't exist, create it 2018-07-02 07:34:28,216 INFO [PEWorker-4] assignment.RegionTransitionProcedure(241): Dispatch pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,369 DEBUG [RSProcedureDispatcher-pool13-t1] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,373 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:48377, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-07-02 07:34:28,373 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=33727] regionserver.RSRpcServices(1983): Open hbase:meta,,1.1588230740 2018-07-02 07:34:28,375 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:34:28,380 WARN [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 
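The WALFactory entry above instantiates org.apache.hadoop.hbase.wal.AsyncFSWALProvider for the meta region's WAL, and the surrounding FanOutOneBlockAsyncDFSOutput lines are that provider writing directly to the datanode pipeline. A hedged sketch of how a test or deployment selects this provider, assuming the standard hbase.wal.provider keys:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" maps to AsyncFSWALProvider, the class instantiated above;
        // "filesystem" (FSHLog) is the other stock option.
        conf.set("hbase.wal.provider", "asyncfs");
        // The meta table's WAL provider can be chosen independently.
        conf.set("hbase.wal.meta_provider", "asyncfs");
      }
    }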
2018-07-02 07:34:28,380 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C33727%2C1530516865112.meta, suffix=.meta, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:34:28,388 DEBUG [RS-EventLoopGroup-13-24] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:34:28,389 DEBUG [RS-EventLoopGroup-13-25] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:28,389 DEBUG [RS-EventLoopGroup-13-26] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:34:28,403 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.meta.1530516868381.meta 2018-07-02 07:34:28,404 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]] 2018-07-02 07:34:28,404 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:28,405 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(200): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2018-07-02 07:34:28,405 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(8086): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2018-07-02 07:34:28,407 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.RegionCoprocessorHost(394): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
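The entries just above show the meta region loading the MultiRowMutationEndpoint coprocessor from its table descriptor. For a user table, the same attachment is declared on the descriptor at creation time; a minimal sketch using the HBase 2+ builder API, where the table name "demo" and family "cf" are hypothetical placeholders and only the endpoint class name is taken from the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorSketch {
      public static void main(String[] args) throws Exception {
        // "demo" and "cf" are hypothetical; the endpoint class is the one logged above.
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
        System.out.println(td);
      }
    }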
2018-07-02 07:34:28,407 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table meta 1588230740 2018-07-02 07:34:28,407 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:28,407 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 1588230740 2018-07-02 07:34:28,408 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 1588230740 2018-07-02 07:34:28,414 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info 2018-07-02 07:34:28,414 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info 2018-07-02 07:34:28,415 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:28,416 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:28,417 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:28,419 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:34:28,419 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:34:28,420 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:28,420 INFO [StoreOpener-1588230740-1] 
compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:28,421 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:28,424 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table 2018-07-02 07:34:28,424 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table 2018-07-02 07:34:28,424 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:28,425 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:28,426 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:28,426 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 1588230740 2018-07-02 07:34:28,431 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740 2018-07-02 07:34:28,431 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for 1588230740 2018-07-02 07:34:28,431 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 1588230740 2018-07-02 07:34:28,439 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 1588230740 2018-07-02 07:34:28,441 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using 
region.getMemStoreFlushHeapSize/# of families (42.7M) instead. 2018-07-02 07:34:28,442 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 1588230740 2018-07-02 07:34:28,444 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 1588230740; next sequenceid=2 2018-07-02 07:34:28,444 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 1588230740 2018-07-02 07:34:28,447 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:meta,,1.1588230740 2018-07-02 07:34:28,452 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=2, pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,453 DEBUG [PEWorker-5] assignment.RegionTransitionProcedure(354): Finishing pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,454 INFO [PEWorker-5] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,455 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:meta,,1.1588230740 2018-07-02 07:34:28,458 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:meta,,1.1588230740 on asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:28,466 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/cluster2/meta-region-server 2018-07-02 07:34:28,661 INFO [PEWorker-5] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=1, state=RUNNABLE; InitMetaProcedure table=hbase:meta; resume parent processing. 2018-07-02 07:34:28,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1266): Finished pid=2, ppid=1, state=SUCCESS; AssignProcedure table=hbase:meta, region=1588230740 in 2.5970sec 2018-07-02 07:34:28,853 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 3.0060sec 2018-07-02 07:34:28,864 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:48385, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:34:28,925 INFO [Thread-409] master.HMaster(962): Master startup: status=Wait for region servers to report in, state=RUNNING, startTime=1530516865233, completionTime=-1 2018-07-02 07:34:28,926 INFO [Thread-409] master.ServerManager(854): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2018-07-02 07:34:28,926 DEBUG [Thread-409] assignment.AssignmentManager(1197): Joining cluster... 
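The FlushLargeStoresPolicy line above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is unset for hbase:meta, so the fallback of memstore flush size divided by the number of families (the 42.7M figure) applies. A sketch of setting the bound explicitly, assuming the site-configuration route; per the message, it can also be set as a value on the table descriptor, which is what gets checked first:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class PerFamilyFlushSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Explicit lower bound (16 MB here, an arbitrary example value); when absent,
        // HBase falls back to memstoreFlushSize / numberOfFamilies as logged above.
        conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            16L * 1024 * 1024);
      }
    }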
2018-07-02 07:34:28,931 INFO [Thread-409] assignment.AssignmentManager(1208): Number of RegionServers=3 2018-07-02 07:34:28,931 INFO [Thread-409] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1530516928931 2018-07-02 07:34:28,931 INFO [Thread-409] assignment.AssignmentManager(1216): Joined the cluster in 5msec 2018-07-02 07:34:28,937 INFO [Thread-409] master.TableNamespaceManager(96): Namespace table not found. Creating... 2018-07-02 07:34:28,937 INFO [Thread-409] master.HMaster(1886): Client=null/null create 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} 2018-07-02 07:34:29,111 DEBUG [Thread-409] procedure2.ProcedureExecutor(887): Stored pid=3, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2018-07-02 07:34:29,241 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0 2018-07-02 07:34:29,244 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0 2018-07-02 07:34:29,245 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741834_1010 size 476 2018-07-02 07:34:29,248 DEBUG [PEWorker-7] util.FSTableDescriptors(683): Wrote into hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2018-07-02 07:34:29,252 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6931): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/.tmp Table name == hbase:namespace 
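The create call above spells out the hbase:namespace column family in HTD string form (VERSIONS => '10', IN_MEMORY => 'true', BLOCKSIZE => '8192', TTL => 'FOREVER', and so on). As a sketch, the same family expressed with the HBase 2+ builder API; only the non-default attributes from that dump are shown:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceFamilySketch {
      public static void main(String[] args) {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setMaxVersions(10)                // VERSIONS => '10'
            .setInMemory(true)                 // IN_MEMORY => 'true'
            .setBlocksize(8192)                // BLOCKSIZE => '8192'
            .setTimeToLive(HConstants.FOREVER) // TTL => 'FOREVER'
            .build();
        System.out.println(info);
      }
    }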
2018-07-02 07:34:29,271 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:29,272 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:34:29,272 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0 2018-07-02 07:34:29,273 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(829): Instantiated hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:29,276 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1527): Closing d1a74048f8e137b8647beefb747aafba, disabling compactions & flushes 2018-07-02 07:34:29,276 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1567): Updates disabled for region hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 2018-07-02 07:34:29,276 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1681): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 2018-07-02 07:34:29,380 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":1530516869379},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1530516869379}]},"ts":1530516869379} 2018-07-02 07:34:29,387 INFO [PEWorker-7] hbase.MetaTableAccessor(1528): Added 1 regions to meta. 
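The MetaTableAccessor Put above writes the new region's info:regioninfo and info:state columns into hbase:meta. A client can read those columns back directly; a minimal sketch, assuming a running cluster reachable from the default configuration, with the row key copied verbatim from the logged Put:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaReadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Row key copied from the Put logged above.
          Get get = new Get(Bytes.toBytes(
              "hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba."));
          Result r = meta.get(get);
          // info:state holds the RegionState name, e.g. OPEN once assignment completes.
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          System.out.println(Bytes.toString(state));
        }
      }
    }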
2018-07-02 07:34:29,517 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516869517}]},"ts":1530516869517} 2018-07-02 07:34:29,522 INFO [PEWorker-7] hbase.MetaTableAccessor(1673): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2018-07-02 07:34:29,576 INFO [PEWorker-7] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112}] 2018-07-02 07:34:29,664 INFO [PEWorker-8] procedure.MasterProcedureScheduler(697): pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112 checking lock on d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:29,668 INFO [PEWorker-8] assignment.AssignProcedure(218): Starting pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112; rit=OFFLINE, location=asf911.gq1.ygridcore.net,33727,1530516865112; forceNewPlan=false, retain=false 2018-07-02 07:34:29,818 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 1 retained the pre-restart assignment. 2018-07-02 07:34:29,821 INFO [PEWorker-9] assignment.RegionStateStore(199): pid=4 updating hbase:meta row=d1a74048f8e137b8647beefb747aafba, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:29,825 INFO [PEWorker-9] assignment.RegionTransitionProcedure(241): Dispatch pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112; rit=OPENING, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:29,979 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=33727] regionserver.RSRpcServices(1983): Open hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 
2018-07-02 07:34:29,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => d1a74048f8e137b8647beefb747aafba, NAME => 'hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:29,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table namespace d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:29,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:29,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:29,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:29,996 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info 2018-07-02 07:34:29,997 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info 2018-07-02 07:34:29,997 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:29,998 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:29,999 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:29,999 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,004 
DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,004 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,007 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-07-02 07:34:30,013 INFO [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened d1a74048f8e137b8647beefb747aafba; next sequenceid=2 2018-07-02 07:34:30,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:34:30,017 INFO [PostOpenDeployTasks:d1a74048f8e137b8647beefb747aafba] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 2018-07-02 07:34:30,021 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=2, pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112; rit=OPENING, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:30,022 DEBUG [PEWorker-10] assignment.RegionTransitionProcedure(354): Finishing pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112; rit=OPENING, location=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:30,022 DEBUG [PostOpenDeployTasks:d1a74048f8e137b8647beefb747aafba] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 2018-07-02 07:34:30,022 INFO [PEWorker-10] assignment.RegionStateStore(199): pid=4 updating hbase:meta row=d1a74048f8e137b8647beefb747aafba, regionState=OPEN, openSeqNum=2, regionLocation=asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:30,024 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. on asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:30,144 INFO [PEWorker-10] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=3, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace; resume parent processing. 
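With the namespace table now online, the entries below record CreateNamespaceProcedure runs for the built-in default and hbase namespaces. User namespaces go through the same master procedure via Admin; a sketch with the hypothetical name "my_ns":

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Triggers a CreateNamespaceProcedure on the master, as logged below
          // for "default" and "hbase".
          admin.createNamespace(NamespaceDescriptor.create("my_ns").build());
        }
      }
    }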
2018-07-02 07:34:30,145 INFO [PEWorker-10] procedure2.ProcedureExecutor(1266): Finished pid=4, ppid=3, state=SUCCESS; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba, target=asf911.gq1.ygridcore.net,33727,1530516865112 in 456msec 2018-07-02 07:34:30,145 DEBUG [PEWorker-11] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516870144}]},"ts":1530516870144} 2018-07-02 07:34:30,149 INFO [PEWorker-11] hbase.MetaTableAccessor(1673): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2018-07-02 07:34:30,236 DEBUG [Thread-409] zookeeper.ZKUtil(357): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on znode that does not yet exist, /cluster2/namespace 2018-07-02 07:34:30,257 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/cluster2/namespace 2018-07-02 07:34:30,323 INFO [PEWorker-11] procedure2.ProcedureExecutor(1266): Finished pid=3, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 1.2900sec 2018-07-02 07:34:30,380 DEBUG [Thread-409] procedure2.ProcedureExecutor(887): Stored pid=5, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2018-07-02 07:34:30,724 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/namespace 2018-07-02 07:34:31,005 INFO [PEWorker-12] procedure2.ProcedureExecutor(1266): Finished pid=5, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 590msec 2018-07-02 07:34:31,221 DEBUG [Thread-409] procedure2.ProcedureExecutor(887): Stored pid=6, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2018-07-02 07:34:31,465 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/namespace 2018-07-02 07:34:31,597 INFO [PEWorker-13] procedure2.ProcedureExecutor(1266): Finished pid=6, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 528msec 2018-07-02 07:34:31,665 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/cluster2/namespace/default 2018-07-02 07:34:31,732 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/cluster2/namespace/hbase 2018-07-02 07:34:31,732 INFO [Thread-409] master.HMaster(1009): Master has completed initialization 6.466sec 2018-07-02 07:34:31,733 INFO [Thread-409] quotas.MasterQuotaManager(90): Quota support disabled 2018-07-02 07:34:31,733 INFO [Thread-409] zookeeper.ZKWatcher(205): not a secure deployment, proceeding 2018-07-02 07:34:31,739 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x4f12723e to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:31,739 DEBUG [Thread-409] 
master.HMaster(1067): Balancer post startup initialization complete, took 0 seconds 2018-07-02 07:34:31,793 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@722a1299, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:31,804 INFO [RS-EventLoopGroup-12-4] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:48414, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:34:31,810 INFO [Time-limited test] hbase.HBaseTestingUtility(1044): Minicluster is up; activeMaster=asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:31,833 INFO [RS-EventLoopGroup-3-5] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:43667, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2018-07-02 07:34:31,856 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.HMaster$3(1850): Client=jenkins//67.195.81.155 create 'SyncRep', {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '1', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} 2018-07-02 07:34:32,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] procedure2.ProcedureExecutor(887): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=SyncRep 2018-07-02 07:34:32,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:32,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:32,203 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0 2018-07-02 07:34:32,207 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741836_1012 size 475 2018-07-02 07:34:32,209 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741836_1012 size 475 2018-07-02 07:34:32,215 DEBUG [PEWorker-8] util.FSTableDescriptors(683): Wrote into hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/.tmp/data/default/SyncRep/.tabledesc/.tableinfo.0000000001 2018-07-02 07:34:32,220 INFO [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(6931): creating HRegion SyncRep HTD == 'SyncRep', {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', 
CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '1', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/.tmp Table name == SyncRep 2018-07-02 07:34:32,249 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:32,249 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:32,252 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741837_1013 size 42 2018-07-02 07:34:32,254 DEBUG [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(829): Instantiated SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:32,256 DEBUG [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(1527): Closing fb68d1abb3b8182f9bd555d291e6d272, disabling compactions & flushes 2018-07-02 07:34:32,256 DEBUG [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:32,256 INFO [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(1681): Closed SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:32,373 DEBUG [PEWorker-8] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":1530516872373},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1530516872373}]},"ts":1530516872373} 2018-07-02 07:34:32,379 INFO [PEWorker-8] hbase.MetaTableAccessor(1528): Added 1 regions to meta. 
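The SyncRep table created above carries REPLICATION_SCOPE => '1' on its cf family, which marks the family's edits for cross-cluster replication. A sketch of the equivalent descriptor in the builder API, assuming HConstants.REPLICATION_SCOPE_GLOBAL (value 1) and showing only the two non-default attributes from the dump:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SyncRepDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor syncRep = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("SyncRep"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("cf"))
                .setMaxVersions(1)                             // VERSIONS => '1'
                .setScope(HConstants.REPLICATION_SCOPE_GLOBAL) // REPLICATION_SCOPE => '1'
                .build())
            .build();
        System.out.println(syncRep);
      }
    }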
2018-07-02 07:34:32,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:32,456 DEBUG [PEWorker-8] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"SyncRep","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516872455}]},"ts":1530516872455} 2018-07-02 07:34:32,460 INFO [PEWorker-8] hbase.MetaTableAccessor(1673): Updated tableName=SyncRep, state=ENABLING in hbase:meta 2018-07-02 07:34:32,525 INFO [PEWorker-8] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959}] 2018-07-02 07:34:32,592 INFO [PEWorker-9] procedure.MasterProcedureScheduler(697): pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959 checking lock on fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,595 INFO [PEWorker-9] assignment.AssignProcedure(218): Starting pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38972,1530516853959; forceNewPlan=false, retain=false 2018-07-02 07:34:32,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:32,746 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 1 retained the pre-restart assignment. 2018-07-02 07:34:32,748 INFO [PEWorker-10] assignment.RegionStateStore(199): pid=8 updating hbase:meta row=fb68d1abb3b8182f9bd555d291e6d272, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:32,753 INFO [PEWorker-10] assignment.RegionTransitionProcedure(241): Dispatch pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:32,908 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=38972] regionserver.RSRpcServices(1983): Open SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 
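The repeated MasterRpcServices "Checking to see if procedure is done pid=7" entries are the client side of a synchronous Admin.createTable call polling the master until the CreateTableProcedure completes. A minimal sketch of the call that produces them, assuming a reachable cluster and a pared-down descriptor:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("SyncRep"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
              .build();
          // Blocks, polling the master ("Checking to see if procedure is done")
          // until the CreateTableProcedure reaches SUCCESS.
          admin.createTable(td);
        }
      }
    }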
2018-07-02 07:34:32,925 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => fb68d1abb3b8182f9bd555d291e6d272, NAME => 'SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:32,926 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table SyncRep fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,926 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:32,926 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,926 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,933 DEBUG [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/cf 2018-07-02 07:34:32,933 DEBUG [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/cf 2018-07-02 07:34:32,942 INFO [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] hfile.CacheConfig(239): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:32,942 INFO [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:32,943 INFO [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] regionserver.HStore(327): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:32,944 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,947 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,947 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal 
replay for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,947 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,948 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,951 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,956 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-07-02 07:34:32,956 INFO [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened fb68d1abb3b8182f9bd555d291e6d272; next sequenceid=2 2018-07-02 07:34:32,956 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:32,975 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-07-02 07:34:32,975 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-07-02 07:34:32,975 INFO [PostOpenDeployTasks:fb68d1abb3b8182f9bd555d291e6d272] regionserver.HRegionServer(2193): Post open deploy tasks for SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:32,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=2, pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:32,979 DEBUG [PEWorker-2] assignment.RegionTransitionProcedure(354): Finishing pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:32,979 DEBUG [PostOpenDeployTasks:fb68d1abb3b8182f9bd555d291e6d272] regionserver.HRegionServer(2217): Finished post open deploy task for SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:32,980 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. on asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:32,981 INFO [PEWorker-2] assignment.RegionStateStore(199): pid=8 updating hbase:meta row=fb68d1abb3b8182f9bd555d291e6d272, regionState=OPEN, repBarrier=2, openSeqNum=2, regionLocation=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:33,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=SyncRep; resume parent processing. 
2018-07-02 07:34:33,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1266): Finished pid=8, ppid=7, state=SUCCESS; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959 in 460msec 2018-07-02 07:34:33,107 DEBUG [PEWorker-11] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"SyncRep","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516873107}]},"ts":1530516873107} 2018-07-02 07:34:33,111 INFO [PEWorker-11] hbase.MetaTableAccessor(1673): Updated tableName=SyncRep, state=ENABLED in hbase:meta 2018-07-02 07:34:33,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:33,298 INFO [PEWorker-11] procedure2.ProcedureExecutor(1266): Finished pid=7, state=SUCCESS; CreateTableProcedure table=SyncRep in 1.3250sec 2018-07-02 07:34:33,950 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2018-07-02 07:34:34,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:34,202 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3668): Operation: CREATE, Table Name: default:SyncRep, procId: 7 completed 2018-07-02 07:34:34,222 INFO [RS-EventLoopGroup-9-5] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:48780, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2018-07-02 07:34:34,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster$3(1850): Client=jenkins//67.195.81.155 create 'SyncRep', {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '1', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} 2018-07-02 07:34:34,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] procedure2.ProcedureExecutor(887): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=SyncRep 2018-07-02 07:34:34,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:34,507 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0 2018-07-02 07:34:34,512 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741836_1012 size 475 2018-07-02 07:34:34,512 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741836_1012 size 475 2018-07-02 
07:34:34,516 DEBUG [PEWorker-14] util.FSTableDescriptors(683): Wrote into hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/.tmp/data/default/SyncRep/.tabledesc/.tableinfo.0000000001 2018-07-02 07:34:34,520 INFO [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(6931): creating HRegion SyncRep HTD == 'SyncRep', {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '1', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/.tmp Table name == SyncRep 2018-07-02 07:34:34,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:34,543 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0 2018-07-02 07:34:34,550 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0 2018-07-02 07:34:34,551 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0 2018-07-02 07:34:34,552 DEBUG [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(829): Instantiated SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:34,553 DEBUG [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(1527): Closing 0f545ce4fc7475df98047cbbbf56ffee, disabling compactions & flushes 2018-07-02 07:34:34,553 DEBUG [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 2018-07-02 07:34:34,553 INFO [RegionOpenAndInitThread-SyncRep-1] regionserver.HRegion(1681): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 
2018-07-02 07:34:34,617 DEBUG [PEWorker-14] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":1530516874617},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1530516874617}]},"ts":1530516874617} 2018-07-02 07:34:34,621 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-07-02 07:34:34,622 INFO [PEWorker-14] hbase.MetaTableAccessor(1528): Added 1 regions to meta. 2018-07-02 07:34:34,678 DEBUG [PEWorker-14] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"SyncRep","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516874678}]},"ts":1530516874678} 2018-07-02 07:34:34,681 INFO [PEWorker-14] hbase.MetaTableAccessor(1673): Updated tableName=SyncRep, state=ENABLING in hbase:meta 2018-07-02 07:34:34,719 INFO [PEWorker-14] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163}] 2018-07-02 07:34:34,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:34,815 INFO [PEWorker-15] procedure.MasterProcedureScheduler(697): pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163 checking lock on 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:34,819 INFO [PEWorker-15] assignment.AssignProcedure(218): Starting pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38428,1530516865163; forceNewPlan=false, retain=false 2018-07-02 07:34:34,969 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 1 retained the pre-restart assignment. 2018-07-02 07:34:34,973 INFO [PEWorker-16] assignment.RegionStateStore(199): pid=8 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:34,977 INFO [PEWorker-16] assignment.RegionTransitionProcedure(241): Dispatch pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:35,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7 2018-07-02 07:34:35,131 DEBUG [RSProcedureDispatcher-pool13-t3] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:35,134 INFO [RS-EventLoopGroup-13-32] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:34208, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-07-02 07:34:35,134 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=38428] regionserver.RSRpcServices(1983): Open SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 
2018-07-02 07:34:35,143 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 0f545ce4fc7475df98047cbbbf56ffee, NAME => 'SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:35,143 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table SyncRep 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,143 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:35,144 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,144 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,150 DEBUG [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/cf 2018-07-02 07:34:35,150 DEBUG [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/cf 2018-07-02 07:34:35,152 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] hfile.CacheConfig(239): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:35,153 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:35,154 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] regionserver.HStore(327): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:35,154 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,157 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,157 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal 
replay for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,157 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,158 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,161 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,168 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-07-02 07:34:35,168 INFO [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 0f545ce4fc7475df98047cbbbf56ffee; next sequenceid=2 2018-07-02 07:34:35,168 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:34:35,177 INFO [PostOpenDeployTasks:0f545ce4fc7475df98047cbbbf56ffee] regionserver.HRegionServer(2193): Post open deploy tasks for SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 2018-07-02 07:34:35,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=2, pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:35,186 DEBUG [PEWorker-2] assignment.RegionTransitionProcedure(354): Finishing pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:35,186 DEBUG [PostOpenDeployTasks:0f545ce4fc7475df98047cbbbf56ffee] regionserver.HRegionServer(2217): Finished post open deploy task for SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 2018-07-02 07:34:35,190 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. on asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:35,190 INFO [PEWorker-2] assignment.RegionStateStore(199): pid=8 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=OPEN, repBarrier=2, openSeqNum=2, regionLocation=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:35,349 INFO [PEWorker-2] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=SyncRep; resume parent processing. 
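[Annotation] The repeated "Checking to see if procedure is done pid=7" entries are the client polling MasterRpcServices until each CreateTableProcedure completes; the "Operation: CREATE, Table Name: default:SyncRep, procId: 7 completed" lines are the client-side HBaseAdmin$TableFuture reporting that result. A hedged sketch of the asynchronous form of the same call, assuming the admin and desc objects from the previous sketch (the timeout value is illustrative):

```java
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// createTableAsync returns a Future backed by the master-side procedure id;
// get() polls the master until the procedure reports done, which is what the
// repeated "Checking to see if procedure is done" DEBUG lines above record.
Future<Void> pending = admin.createTableAsync(desc, null); // null split keys = single region
pending.get(5, TimeUnit.MINUTES); // corresponds to the "procId: 7 completed" log line
```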
2018-07-02 07:34:35,350 INFO [PEWorker-2] procedure2.ProcedureExecutor(1266): Finished pid=8, ppid=7, state=SUCCESS; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,38428,1530516865163 in 476msec
2018-07-02 07:34:35,350 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"SyncRep","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1530516875349}]},"ts":1530516875349}
2018-07-02 07:34:35,354 INFO [PEWorker-1] hbase.MetaTableAccessor(1673): Updated tableName=SyncRep, state=ENABLED in hbase:meta
2018-07-02 07:34:35,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7
2018-07-02 07:34:35,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1266): Finished pid=7, state=SUCCESS; CreateTableProcedure table=SyncRep in 1.2200sec
2018-07-02 07:34:36,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=7
2018-07-02 07:34:36,546 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3668): Operation: CREATE, Table Name: default:SyncRep, procId: 7 completed
2018-07-02 07:34:36,549 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:36,550 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:34:36,576 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.HMaster(3509): Client=jenkins//67.195.81.155 creating replication peer, id=1, config=clusterKey=localhost:59178:/cluster2,replicationEndpointImpl=null,replicateAllUserTables=false,tableCFs={SyncRep=null},bandwidth=0,serial=false,remoteWALDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/remoteWALs, state=ENABLED
2018-07-02 07:34:36,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] procedure2.ProcedureExecutor(887): Stored pid=9, state=RUNNABLE:PRE_PEER_MODIFICATION; org.apache.hadoop.hbase.master.replication.AddPeerProcedure
2018-07-02 07:34:36,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:36,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:36,919 INFO [PEWorker-12] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=11, ppid=9, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=12, ppid=9, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}]
2018-07-02 07:34:37,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:37,192 DEBUG [RSProcedureDispatcher-pool3-t4] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,42768,1530516853889
2018-07-02 07:34:37,199 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ADD_PEER
2018-07-02 07:34:37,206 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ADD_PEER
2018-07-02 07:34:37,206 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:52459, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2018-07-02 07:34:37,208 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ADD_PEER
2018-07-02 07:34:37,270 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:37,270 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:37,277 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:37,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:37,437 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for ADD on asf911.gq1.ygridcore.net,38972,1530516853959 succeeded
2018-07-02 07:34:37,437 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for ADD on asf911.gq1.ygridcore.net,42768,1530516853889 succeeded
2018-07-02 07:34:37,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for ADD on asf911.gq1.ygridcore.net,46264,1530516853823 succeeded
2018-07-02 07:34:37,492 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x4487a649 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:37,502 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x60aeee92 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:37,507 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x27fd2076 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:37,520 INFO [PEWorker-4] procedure2.ProcedureExecutor(1266): Finished pid=10, ppid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 520msec
2018-07-02 07:34:37,533 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@7bd9e887, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:37,536 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:37,541 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@64fc31c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:37,542 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:37,549 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:37,550 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500019 connected
2018-07-02 07:34:37,551 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@44dfe4de, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:37,552 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:37,574 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:37,575 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450001a connected
2018-07-02 07:34:37,607 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:37,609 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450001b connected
2018-07-02 07:34:37,609 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 62bd510b-3b5c-46d2-af05-cbc0179a0f7b -> 4453c2bd-27e1-4723-9c16-c1873c79d2e4
2018-07-02 07:34:37,609 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 62bd510b-3b5c-46d2-af05-cbc0179a0f7b -> 4453c2bd-27e1-4723-9c16-c1873c79d2e4
2018-07-02 07:34:37,610 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 62bd510b-3b5c-46d2-af05-cbc0179a0f7b -> 4453c2bd-27e1-4723-9c16-c1873c79d2e4
2018-07-02 07:34:37,621 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38972%2C1530516853959
2018-07-02 07:34:37,621 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C46264%2C1530516853823
2018-07-02 07:34:37,621 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C42768%2C1530516853889
2018-07-02 07:34:37,622 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:37,622 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:37,622 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:37,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1266): Finished pid=12, ppid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 524msec
2018-07-02 07:34:37,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:37,952 INFO [PEWorker-5] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=9, state=RUNNABLE:POST_PEER_MODIFICATION; org.apache.hadoop.hbase.master.replication.AddPeerProcedure; resume parent processing.
2018-07-02 07:34:37,952 INFO [PEWorker-6] replication.AddPeerProcedure(101): Successfully added ENABLED peer 1, config clusterKey=localhost:59178:/cluster2,replicationEndpointImpl=null,replicateAllUserTables=false,tableCFs={SyncRep=null},bandwidth=0,serial=false,remoteWALDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/remoteWALs
2018-07-02 07:34:37,953 INFO [PEWorker-5] procedure2.ProcedureExecutor(1266): Finished pid=11, ppid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 524msec
2018-07-02 07:34:38,010 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.AddPeerProcedure in 1.3750sec
2018-07-02 07:34:38,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:38,869 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3509): Client=jenkins//67.195.81.155 creating replication peer, id=1, config=clusterKey=localhost:59178:/cluster1,replicationEndpointImpl=null,replicateAllUserTables=false,tableCFs={SyncRep=null},bandwidth=0,serial=false,remoteWALDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/remoteWALs, state=ENABLED
2018-07-02 07:34:38,975 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-07-02 07:34:39,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] procedure2.ProcedureExecutor(887): Stored pid=9, state=RUNNABLE:PRE_PEER_MODIFICATION; org.apache.hadoop.hbase.master.replication.AddPeerProcedure
2018-07-02 07:34:39,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:39,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:39,190 INFO [PEWorker-3] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=11, ppid=9, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=12, ppid=9, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}]
2018-07-02 07:34:39,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:39,430 DEBUG [RSProcedureDispatcher-pool13-t4] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,43014,1530516865056
2018-07-02 07:34:39,440 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ADD_PEER
2018-07-02 07:34:39,441 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ADD_PEER
2018-07-02 07:34:39,441 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:45461, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2018-07-02 07:34:39,444 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ADD_PEER
2018-07-02 07:34:39,512 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:39,517 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:39,521 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:39,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:39,696 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for ADD on asf911.gq1.ygridcore.net,43014,1530516865056 succeeded
2018-07-02 07:34:39,700 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for ADD on asf911.gq1.ygridcore.net,38428,1530516865163 succeeded
2018-07-02 07:34:39,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for ADD on asf911.gq1.ygridcore.net,33727,1530516865112 succeeded
2018-07-02 07:34:39,755 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x2169695f to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:39,759 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x58554889 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:39,767 INFO [PEWorker-7] procedure2.ProcedureExecutor(1266): Finished pid=12, ppid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 509msec
2018-07-02 07:34:39,769 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x00ecba75 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:39,788 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@3074cf5c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:39,789 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:39,794 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@76b639f8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:39,795 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:39,801 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:39,803 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450001f connected
2018-07-02 07:34:39,803 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@6059b5e8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:39,804 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:39,808 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:39,809 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500020 connected
2018-07-02 07:34:39,815 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:39,817 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500021 connected
2018-07-02 07:34:39,817 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:34:39,818 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C33727%2C1530516865112
2018-07-02 07:34:39,818 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:34:39,818 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:34:39,818 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:39,818 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38428%2C1530516865163
2018-07-02 07:34:39,818 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C43014%2C1530516865056
2018-07-02 07:34:39,818 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:39,818 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:39,940 INFO [PEWorker-8] procedure2.ProcedureExecutor(1266): Finished pid=10, ppid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 514msec
2018-07-02 07:34:40,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9
2018-07-02 07:34:40,190 INFO [PEWorker-9] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=9, state=RUNNABLE:POST_PEER_MODIFICATION; org.apache.hadoop.hbase.master.replication.AddPeerProcedure; resume parent processing.
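[Annotation] Both AddPeerProcedure runs above install the same peer id=1 against the shared MiniZooKeeperCluster, differing only in the base znode (/cluster2 vs /cluster1) and the remote WAL directory; the RefreshPeerProcedure subprocedures then fan the change out to each regionserver, which starts a ReplicationSource per WAL group. A minimal sketch of issuing the first of those calls from a client, assuming an Admin handle as in the earlier sketches and a build that includes the synchronous-replication work (HBASE-19064); the literal values are copied from the log:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

Map<TableName, List<String>> tableCFs = new HashMap<>();
tableCFs.put(TableName.valueOf("SyncRep"), null); // null = all CFs; "tableCFs={SyncRep=null}" in the log

ReplicationPeerConfig peer = ReplicationPeerConfig.newBuilder()
    .setClusterKey("localhost:59178:/cluster2")
    .setReplicateAllUserTables(false)
    .setTableCFsMap(tableCFs)
    .setRemoteWALDir("hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/remoteWALs")
    .build();

admin.addReplicationPeer("1", peer, true); // enabled=true -> "Successfully added ENABLED peer 1"
```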
2018-07-02 07:34:40,190 INFO [PEWorker-9] procedure2.ProcedureExecutor(1266): Finished pid=11, ppid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 514msec 2018-07-02 07:34:40,190 INFO [PEWorker-10] replication.AddPeerProcedure(101): Successfully added ENABLED peer 1, config clusterKey=localhost:59178:/cluster1,replicationEndpointImpl=null,replicateAllUserTables=false,tableCFs={SyncRep=null},bandwidth=0,serial=false,remoteWALDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/remoteWALs 2018-07-02 07:34:40,258 INFO [PEWorker-10] procedure2.ProcedureExecutor(1266): Finished pid=9, state=SUCCESS; org.apache.hadoop.hbase.master.replication.AddPeerProcedure in 1.3210sec 2018-07-02 07:34:40,649 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2018-07-02 07:34:41,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=9 2018-07-02 07:34:41,294 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:41,295 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:41,295 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:41,295 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:41,298 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:41,298 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:41,367 INFO [Time-limited test] hbase.ResourceChecker(148): before: replication.TestSyncReplicationStandbyKillRS#testStandbyKillRegionServer Thread=859, OpenFileDescriptor=3156, MaxFileDescriptor=60000, SystemLoadAverage=603, ProcessCount=271, AvailableMemoryMB=12980 2018-07-02 07:34:41,368 WARN [Time-limited test] hbase.ResourceChecker(135): Thread=859 is superior to 500 2018-07-02 07:34:41,368 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=3156 is superior to 1024 2018-07-02 07:34:41,376 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3574): Client=jenkins//67.195.81.155 transit current cluster state to STANDBY in a synchronous replication peer id=1 2018-07-02 07:34:41,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] procedure2.ProcedureExecutor(887): Stored pid=13, state=RUNNABLE:PRE_PEER_SYNC_REPLICATION_STATE_TRANSITION; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure 2018-07-02 07:34:41,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:41,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:41,756 INFO [asf911:38972Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38972%2C1530516853959]: currently replicating from: 
hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959.1530516857838 at position: 346 2018-07-02 07:34:41,756 INFO [asf911:42768Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C42768%2C1530516853889]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,42768,1530516853889/asf911.gq1.ygridcore.net%2C42768%2C1530516853889.1530516857838 at position: -1 2018-07-02 07:34:41,756 INFO [asf911:46264Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46264%2C1530516853823]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823/asf911.gq1.ygridcore.net%2C46264%2C1530516853823.1530516857838 at position: 586 2018-07-02 07:34:41,816 INFO [PEWorker-11] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=15, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=16, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:41,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:42,040 INFO [asf911:38428Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38428%2C1530516865163]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 at position: 346 2018-07-02 07:34:42,040 INFO [asf911:43014Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C43014%2C1530516865056]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 at position: -1 2018-07-02 07:34:42,041 INFO [asf911:33727Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C33727%2C1530516865112]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 at position: 586 2018-07-02 07:34:42,108 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:42,109 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] 
regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:42,109 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:42,134 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:42,134 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:42,134 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:42,134 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(483): Terminate replication source for 1 2018-07-02 07:34:42,134 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(483): Terminate replication source for 1 2018-07-02 07:34:42,135 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(481): Closing source 1 because: Peer 1 state or config changed. Will close the previous replication source and open a new one 2018-07-02 07:34:42,134 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(483): Terminate replication source for 1 2018-07-02 07:34:42,135 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(481): Closing source 1 because: Peer 1 state or config changed. Will close the previous replication source and open a new one 2018-07-02 07:34:42,135 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(481): Closing source 1 because: Peer 1 state or config changed. 
Will close the previous replication source and open a new one 2018-07-02 07:34:42,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:42,265 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x2169695f to localhost:59178 2018-07-02 07:34:42,266 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:42,267 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1 terminated 2018-07-02 07:34:42,268 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(490): Startup replication source for 1 2018-07-02 07:34:42,274 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x58554889 to localhost:59178 2018-07-02 07:34:42,274 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x00ecba75 to localhost:59178 2018-07-02 07:34:42,274 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:42,274 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:42,274 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1 terminated 2018-07-02 07:34:42,274 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C43014%2C1530516865056,1 terminated 2018-07-02 07:34:42,275 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(490): Startup replication source for 1 2018-07-02 07:34:42,275 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(490): Startup replication source for 1 2018-07-02 07:34:42,283 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,33727,1530516865112 suceeded 2018-07-02 07:34:42,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,38428,1530516865163 suceeded 2018-07-02 07:34:42,291 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,43014,1530516865056 suceeded 2018-07-02 07:34:42,327 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x28e4daab to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:42,335 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x67eb8c0d to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 
07:34:42,339 INFO [PEWorker-15] procedure2.ProcedureExecutor(1266): Finished pid=14, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 470msec 2018-07-02 07:34:42,345 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x35169b31 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:42,358 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@3249b074, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:42,358 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@14cf73d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:42,358 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@756c4c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:42,358 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:42,358 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:42,358 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:42,374 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:42,375 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500025 connected 2018-07-02 07:34:42,382 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:42,388 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500026 connected 2018-07-02 07:34:42,390 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:42,392 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 
1-0x16459e9b4500027 connected 2018-07-02 07:34:42,394 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:42,394 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:42,394 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:42,394 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C33727%2C1530516865112 2018-07-02 07:34:42,394 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38428%2C1530516865163 2018-07-02 07:34:42,394 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:42,394 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C43014%2C1530516865056 2018-07-02 07:34:42,394 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:42,395 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:42,488 INFO [PEWorker-2] procedure2.ProcedureExecutor(1266): Finished pid=16, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 480msec 2018-07-02 07:34:42,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:42,688 INFO [PEWorker-16] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=13, state=RUNNABLE:REMOVE_ALL_REPLICATION_QUEUES_IN_PEER; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing. 
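
For reference, the TRANSIT_SYNC_REPLICATION_STATE traffic above is the server-side fan-out of a single client call. Below is a minimal sketch of how a client drives such a transition through the HBase Admin API; the quorum setting is illustrative (the test harness wires its connection to the MiniZooKeeperCluster differently), and the target state STANDBY is inferred from the "from DOWNGRADE_ACTIVE to STANDBY" entry later in this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.SyncReplicationState;

public class TransitPeerStateSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Illustrative quorum; the test above uses a MiniZooKeeperCluster on a random port.
    conf.set("hbase.zookeeper.quorum", "localhost");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Peer "1" matches the peerId in the log. On the master this becomes a
      // TransitPeerSyncReplicationStateProcedure (pid=13 above), which spawns one
      // RefreshPeerProcedure per region server (the pid=14..16 subprocedures).
      admin.transitReplicationPeerSyncReplicationState("1", SyncReplicationState.STANDBY);
    }
  }
}

While the blocking call runs, the client polls the master for completion, which is what produces the recurring "Checking to see if procedure is done pid=13" DEBUG entries throughout this excerpt.
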
2018-07-02 07:34:42,688 INFO [PEWorker-16] procedure2.ProcedureExecutor(1266): Finished pid=15, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 479msec 2018-07-02 07:34:42,691 DEBUG [PEWorker-1] zookeeper.ZKUtil(355): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056/1/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 2018-07-02 07:34:42,707 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056/1/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 2018-07-02 07:34:42,707 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056/1 2018-07-02 07:34:42,707 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056/1 2018-07-02 07:34:42,716 DEBUG [PEWorker-1] zookeeper.ZKUtil(355): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 2018-07-02 07:34:42,723 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 2018-07-02 07:34:42,723 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1 2018-07-02 07:34:42,724 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1 2018-07-02 07:34:42,733 DEBUG [PEWorker-1] zookeeper.ZKUtil(355): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112/1/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 2018-07-02 07:34:42,762 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112/1/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 2018-07-02 07:34:42,763 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): 
master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112/1 2018-07-02 07:34:42,763 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112/1 2018-07-02 07:34:42,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=18, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=19, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:43,075 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:43,075 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:43,075 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:43,104 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:43,104 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:43,104 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSourceManager(423): Terminate replication source for 1 2018-07-02 07:34:43,104 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(481): Closing source 1 because: Sync replication peer 1 is transiting to STANDBY. Will close the previous replication source and open a new one 2018-07-02 07:34:43,104 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSourceManager(423): Terminate replication source for 1 2018-07-02 07:34:43,104 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(481): Closing source 1 because: Sync replication peer 1 is transiting to STANDBY. Will close the previous replication source and open a new one 2018-07-02 07:34:43,106 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:43,106 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSourceManager(423): Terminate replication source for 1 2018-07-02 07:34:43,106 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(481): Closing source 1 because: Sync replication peer 1 is transiting to STANDBY. 
Will close the previous replication source and open a new one 2018-07-02 07:34:43,224 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x28e4daab to localhost:59178 2018-07-02 07:34:43,227 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x35169b31 to localhost:59178 2018-07-02 07:34:43,228 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:43,227 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x67eb8c0d to localhost:59178 2018-07-02 07:34:43,228 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C43014%2C1530516865056,1 terminated 2018-07-02 07:34:43,228 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:43,228 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:43,228 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1 terminated 2018-07-02 07:34:43,229 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1 terminated 2018-07-02 07:34:43,229 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSourceManager(434): Startup replication source for 1 2018-07-02 07:34:43,229 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSourceManager(434): Startup replication source for 1 2018-07-02 07:34:43,230 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.ReplicationSourceManager(434): Startup replication source for 1 2018-07-02 07:34:43,240 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056/1/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 already deleted, retry=false 2018-07-02 07:34:43,240 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 already deleted, retry=false 2018-07-02 07:34:43,241 WARN [RS_REFRESH_PEER-regionserver/asf911:0-0] replication.ZKReplicationQueueStorage(200): /cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 already deleted when removing log 2018-07-02 07:34:43,240 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112/1/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 already deleted, retry=false 2018-07-02 07:34:43,241 WARN [RS_REFRESH_PEER-regionserver/asf911:0-0] replication.ZKReplicationQueueStorage(200): /cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056/1/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 
already deleted when removing log 2018-07-02 07:34:43,241 WARN [RS_REFRESH_PEER-regionserver/asf911:0-0] replication.ZKReplicationQueueStorage(200): /cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112/1/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 already deleted when removing log 2018-07-02 07:34:43,250 DEBUG [regionserver/asf911:0.logRoller] regionserver.LogRoller(178): WAL roll requested 2018-07-02 07:34:43,250 DEBUG [regionserver/asf911:0.logRoller] regionserver.LogRoller(178): WAL roll requested 2018-07-02 07:34:43,250 DEBUG [regionserver/asf911:0.logRoller] regionserver.LogRoller(178): WAL roll requested 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-22] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-21] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-4] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-12] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-17] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-13] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-18] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK] 2018-07-02 07:34:43,264 DEBUG [RS-EventLoopGroup-13-9] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:34:43,305 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x44ffd2cc to localhost:59178 with session timeout=90000ms, retries 1, retry interval 
10ms, keepAlive=60000ms 2018-07-02 07:34:43,305 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x5496e3df to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:43,306 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x5b3869c3 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:43,357 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@7f247bfc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:43,358 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:43,365 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@61754cde, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:43,366 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@20f52f3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:43,366 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:43,366 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:43,374 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:43,377 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450002b connected 2018-07-02 07:34:43,378 INFO [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(682): Rolled WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 2018-07-02 07:34:43,378 INFO [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(682): Rolled WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516868070 with entries=3, filesize=586 B; new WAL 
/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250 2018-07-02 07:34:43,378 DEBUG [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK]] 2018-07-02 07:34:43,378 DEBUG [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK]] 2018-07-02 07:34:43,379 INFO [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(663): Archiving hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516868056 2018-07-02 07:34:43,389 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:43,392 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450002c connected 2018-07-02 07:34:43,392 INFO [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(682): Rolled WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 with entries=1, filesize=346 B; new WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 2018-07-02 07:34:43,392 DEBUG [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]] 2018-07-02 07:34:43,393 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:43,393 INFO [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(663): Archiving hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516868070 2018-07-02 07:34:43,393 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection 
to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:43,393 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38428%2C1530516865163 2018-07-02 07:34:43,396 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:43,395 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450002d connected 2018-07-02 07:34:43,397 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:43,397 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:34:43,397 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C43014%2C1530516865056 2018-07-02 07:34:43,397 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C33727%2C1530516865112 2018-07-02 07:34:43,397 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:43,397 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:43,408 DEBUG [RS-EventLoopGroup-13-23] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:43,409 DEBUG [RS-EventLoopGroup-13-24] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:34:43,409 DEBUG [RS-EventLoopGroup-13-20] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK] 2018-07-02 07:34:43,410 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], 
ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW]]} size 0 2018-07-02 07:34:43,410 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:34:43,413 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741832_1008 size 594 2018-07-02 07:34:43,417 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741830_1006 size 91 2018-07-02 07:34:43,418 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741830_1006 size 91 2018-07-02 07:34:43,420 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW]]} size 0 2018-07-02 07:34:43,420 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741832_1008 size 594 2018-07-02 07:34:43,421 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW]]} size 0 2018-07-02 07:34:43,422 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741831_1007 size 354 2018-07-02 07:34:43,426 INFO [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(682): Rolled WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.meta.1530516868381.meta with entries=11, filesize=3.29 KB; new WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.meta.1530516883379.meta 2018-07-02 07:34:43,426 DEBUG [regionserver/asf911:0.logRoller] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK], 
DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]] 2018-07-02 07:34:43,432 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:43,432 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:43,433 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,38428,1530516865163 succeeded 2018-07-02 07:34:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,33727,1530516865112 succeeded 2018-07-02 07:34:43,454 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,43014,1530516865056 succeeded 2018-07-02 07:34:43,523 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=18, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 581msec 2018-07-02 07:34:43,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:43,681 INFO [PEWorker-7] procedure2.ProcedureExecutor(1266): Finished pid=19, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 594msec 2018-07-02 07:34:43,903 INFO [PEWorker-8] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=13, state=RUNNABLE:SYNC_REPLICATION_SET_PEER_ENABLED; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing.
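
The "WAL roll requested" / "Rolled WAL" entries above are triggered internally by the STANDBY transition, so that the old WAL files can be archived and their replication-queue znodes removed (hence the "already deleted when removing log" warnings). The same roll can also be requested explicitly through the Admin API; a small sketch, reusing an Admin handle obtained as in the previous sketch, with the ServerName taken from one of the region servers in this log:

import java.io.IOException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

class RollWalSketch {
  static void rollWal(Admin admin) throws IOException {
    // Host, port and startcode match the asf911...,43014,1530516865056 region server above.
    ServerName rs = ServerName.valueOf("asf911.gq1.ygridcore.net", 43014, 1530516865056L);
    // Asks that region server to roll its WAL writer; its LogRoller then emits
    // the same "Rolled WAL ... new WAL ..." sequence seen in the entries above.
    admin.rollWALWriter(rs);
  }
}
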
2018-07-02 07:34:43,903 INFO [PEWorker-8] procedure2.ProcedureExecutor(1266): Finished pid=17, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 595msec 2018-07-02 07:34:43,989 INFO [PEWorker-9] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=21, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=22, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:44,199 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ENABLE_PEER 2018-07-02 07:34:44,199 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ENABLE_PEER 2018-07-02 07:34:44,199 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=ENABLE_PEER 2018-07-02 07:34:44,201 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for ENABLE on asf911.gq1.ygridcore.net,38428,1530516865163 succeeded 2018-07-02 07:34:44,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for ENABLE on asf911.gq1.ygridcore.net,33727,1530516865112 succeeded 2018-07-02 07:34:44,201 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for ENABLE on asf911.gq1.ygridcore.net,43014,1530516865056 succeeded 2018-07-02 07:34:44,208 INFO [PEWorker-14] procedure2.ProcedureExecutor(1266): Finished pid=21, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 214msec 2018-07-02 07:34:44,443 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:44,444 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:44,444 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:44,444 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:44,444 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:34:44,444 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:34:44,465 INFO [PEWorker-12] procedure2.ProcedureExecutor(1266): Finished pid=20, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 216msec 2018-07-02 07:34:44,465 INFO [PEWorker-11] procedure2.ProcedureExecutor(1266): Finished pid=21, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 214msec 2018-07-02 07:34:44,540 INFO [PEWorker-13] procedure2.ProcedureExecutor(1266): Finished pid=20, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 216msec 2018-07-02 07:34:44,540 INFO [PEWorker-10] procedure2.ProcedureExecutor(1266): Finished pid=22, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 219msec 2018-07-02 07:34:44,540 INFO
[PEWorker-15] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=13, state=RUNNABLE:CREATE_DIR_FOR_REMOTE_WAL; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing. 2018-07-02 07:34:44,540 INFO [PEWorker-15] procedure2.ProcedureExecutor(1266): Finished pid=22, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 219msec 2018-07-02 07:34:44,611 INFO [PEWorker-13] replication.TransitPeerSyncReplicationStateProcedure(132): Successfully transit current cluster state from DOWNGRADE_ACTIVE to STANDBY for sync replication peer 1 2018-07-02 07:34:44,673 INFO [PEWorker-13] procedure2.ProcedureExecutor(1266): Finished pid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure in 3.2330sec 2018-07-02 07:34:45,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:45,679 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.HMaster(3574): Client=jenkins//67.195.81.155 transit current cluster state to ACTIVE in a synchronous replication peer id=1 2018-07-02 07:34:45,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] procedure2.ProcedureExecutor(887): Stored pid=13, state=RUNNABLE:PRE_PEER_SYNC_REPLICATION_STATE_TRANSITION; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure 2018-07-02 07:34:45,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:45,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:46,012 INFO [PEWorker-16] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=15, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=16, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:46,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:46,251 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:46,256 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:46,256 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,46264,1530516853823 succeeded 2018-07-02 07:34:46,257 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:46,260 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,38972,1530516853959 succeeded 2018-07-02 07:34:46,262 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263]
replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,42768,1530516853889 succeeded 2018-07-02 07:34:46,323 INFO [PEWorker-9] procedure2.ProcedureExecutor(1266): Finished pid=16, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 247msec 2018-07-02 07:34:46,461 WARN [RS:2;asf911:38972] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:34:46,461 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1, suffix=.syncrep, logDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959, archiveDir=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:46,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:46,467 DEBUG [RS-EventLoopGroup-6-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:48785,DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee,DISK] 2018-07-02 07:34:46,467 DEBUG [RS-EventLoopGroup-6-7] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45556,DS-fb979981-ad7d-4df7-af08-69017228b672,DISK] 2018-07-02 07:34:46,467 DEBUG [RS-EventLoopGroup-6-17] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33954,DS-137fa992-0531-460e-8da1-5d0327e9db5c,DISK] 2018-07-02 07:34:46,475 INFO [PEWorker-10] procedure2.ProcedureExecutor(1266): Finished pid=14, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 250msec 2018-07-02 07:34:46,485 DEBUG [RS-EventLoopGroup-6-10] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:34:46,485 DEBUG [RS-EventLoopGroup-6-8] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK] 2018-07-02 07:34:46,486 DEBUG [RS-EventLoopGroup-6-21] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:34:46,507 DEBUG [RS:2;asf911:38972] regionserver.ReplicationSourceManager(773): Start tracking logs for wal group asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1 for peer 1 2018-07-02 07:34:46,507 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep 2018-07-02
07:34:46,508 DEBUG [RS:2;asf911:38972] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1 2018-07-02 07:34:46,508 INFO [RS:2;asf911:38972] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:46,508 DEBUG [RS:2;asf911:38972] wal.AbstractFSWAL(775): Create new DualAsyncFSWAL writer with pipeline: [] 2018-07-02 07:34:46,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=13, state=RUNNABLE:REOPEN_ALL_REGIONS_IN_PEER; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing. 2018-07-02 07:34:46,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1266): Finished pid=15, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 252msec 2018-07-02 07:34:46,738 INFO [PEWorker-11] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE:REOPEN_TABLE_REGIONS_GET_REGIONS; ReopenTableRegionsProcedure table=SyncRep}] 2018-07-02 07:34:46,756 INFO [asf911:46264Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46264%2C1530516853823]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823/asf911.gq1.ygridcore.net%2C46264%2C1530516853823.1530516857838 at position: 586 2018-07-02 07:34:46,756 INFO [asf911:38972Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep at position: -1 walGroup [asf911.gq1.ygridcore.net%2C38972%2C1530516853959]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959.1530516857838 at position: 346 2018-07-02 07:34:46,756 INFO [asf911:42768Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C42768%2C1530516853889]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,42768,1530516853889/asf911.gq1.ygridcore.net%2C42768%2C1530516853889.1530516857838 at position: -1 2018-07-02 07:34:46,902 INFO [PEWorker-12] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure hri=fb68d1abb3b8182f9bd555d291e6d272, source=asf911.gq1.ygridcore.net,38972,1530516853959, destination=asf911.gq1.ygridcore.net,38972,1530516853959}] 2018-07-02 07:34:46,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to 
see if procedure is done pid=13 2018-07-02 07:34:46,982 INFO [PEWorker-13] procedure.MasterProcedureScheduler(697): pid=18, ppid=17, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure hri=fb68d1abb3b8182f9bd555d291e6d272, source=asf911.gq1.ygridcore.net,38972,1530516853959, destination=asf911.gq1.ygridcore.net,38972,1530516853959 checking lock on fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:46,983 INFO [PEWorker-13] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, server=asf911.gq1.ygridcore.net,38972,1530516853959}] 2018-07-02 07:34:47,040 INFO [asf911:38428Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38428%2C1530516865163]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 at position: -1 2018-07-02 07:34:47,040 INFO [asf911:43014Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C43014%2C1530516865056]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 at position: -1 2018-07-02 07:34:47,041 INFO [asf911:33727Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C33727%2C1530516865112]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250 at position: -1 2018-07-02 07:34:47,076 INFO [PEWorker-13] procedure.MasterProcedureScheduler(697): pid=19, ppid=18, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, server=asf911.gq1.ygridcore.net,38972,1530516853959 checking lock on fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,076 INFO [PEWorker-13] assignment.RegionStateStore(199): pid=19 updating hbase:meta row=fb68d1abb3b8182f9bd555d291e6d272, regionState=CLOSING, regionLocation=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,081 INFO [PEWorker-13] assignment.RegionTransitionProcedure(241): Dispatch pid=19, ppid=18, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, server=asf911.gq1.ygridcore.net,38972,1530516853959; rit=CLOSING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,236 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=38972] regionserver.RSRpcServices(1607): Close fb68d1abb3b8182f9bd555d291e6d272, moving to asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,240 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing fb68d1abb3b8182f9bd555d291e6d272, disabling compactions & flushes 2018-07-02 07:34:47,240 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] 
regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:47,251 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-07-02 07:34:47,256 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:47,257 WARN [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegionServer(3423): Not adding moved region record: fb68d1abb3b8182f9bd555d291e6d272 to self. 2018-07-02 07:34:47,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] assignment.RegionTransitionProcedure(264): Received report CLOSED seqId=-1, pid=19, ppid=18, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, server=asf911.gq1.ygridcore.net,38972,1530516853959; rit=CLOSING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,259 DEBUG [PEWorker-15] assignment.RegionTransitionProcedure(354): Finishing pid=19, ppid=18, state=RUNNABLE:REGION_TRANSITION_FINISH; UnassignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, server=asf911.gq1.ygridcore.net,38972,1530516853959; rit=CLOSING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,260 INFO [PEWorker-15] assignment.RegionStateStore(199): pid=19 updating hbase:meta row=fb68d1abb3b8182f9bd555d291e6d272, regionState=CLOSED 2018-07-02 07:34:47,260 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:47,418 INFO [PEWorker-15] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=18, ppid=17, state=RUNNABLE:MOVE_REGION_ASSIGN; MoveRegionProcedure hri=fb68d1abb3b8182f9bd555d291e6d272, source=asf911.gq1.ygridcore.net,38972,1530516853959, destination=asf911.gq1.ygridcore.net,38972,1530516853959; resume parent processing. 2018-07-02 07:34:47,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=20, ppid=18, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959}] 2018-07-02 07:34:47,419 INFO [PEWorker-15] procedure2.ProcedureExecutor(1266): Finished pid=19, ppid=18, state=SUCCESS; UnassignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, server=asf911.gq1.ygridcore.net,38972,1530516853959 in 280msec 2018-07-02 07:34:47,489 INFO [PEWorker-4] procedure.MasterProcedureScheduler(697): pid=20, ppid=18, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959 checking lock on fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,492 INFO [PEWorker-4] assignment.AssignProcedure(218): Starting pid=20, ppid=18, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38972,1530516853959; forceNewPlan=false, retain=false 2018-07-02 07:34:47,643 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 
1 retained the pre-restart assignment. 2018-07-02 07:34:47,643 INFO [PEWorker-5] assignment.RegionStateStore(199): pid=20 updating hbase:meta row=fb68d1abb3b8182f9bd555d291e6d272, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,647 INFO [PEWorker-5] assignment.RegionTransitionProcedure(241): Dispatch pid=20, ppid=18, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,770 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2018-07-02 07:34:47,800 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=38972] regionserver.RSRpcServices(1983): Open SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:47,807 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(7108): Opening region: {ENCODED => fb68d1abb3b8182f9bd555d291e6d272, NAME => 'SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:34:47,807 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table SyncRep fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,808 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(829): Instantiated SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:34:47,808 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(7148): checking encryption for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,808 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(7153): checking classloading for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,813 DEBUG [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/cf 2018-07-02 07:34:47,814 DEBUG [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/cf 2018-07-02 07:34:47,815 INFO [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] hfile.CacheConfig(239): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:47,815 INFO [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:34:47,816 INFO [StoreOpener-fb68d1abb3b8182f9bd555d291e6d272-1] regionserver.HStore(327): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:34:47,817 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(925): replaying wal for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,819 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,819 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(933): stopping wal replay for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,819 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(945): Cleaning up temporary data for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,833 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(956): Cleaning up detritus for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,837 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(978): writing seq id for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,837 INFO [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(982): Opened fb68d1abb3b8182f9bd555d291e6d272; next sequenceid=5 2018-07-02 07:34:47,838 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] regionserver.HRegion(989): Running coprocessor post-open hooks for fb68d1abb3b8182f9bd555d291e6d272 2018-07-02 07:34:47,857 INFO [PostOpenDeployTasks:fb68d1abb3b8182f9bd555d291e6d272] regionserver.HRegionServer(2193): Post open deploy tasks for SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:47,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=5, pid=20, ppid=18, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,859 DEBUG [PEWorker-6] assignment.RegionTransitionProcedure(354): Finishing pid=20, ppid=18, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959; rit=OPENING, location=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,859 INFO [PEWorker-6] assignment.RegionStateStore(199): pid=20 updating hbase:meta row=fb68d1abb3b8182f9bd555d291e6d272, regionState=OPEN, repBarrier=5, openSeqNum=5, regionLocation=asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,859 DEBUG [PostOpenDeployTasks:fb68d1abb3b8182f9bd555d291e6d272] regionserver.HRegionServer(2217): Finished post open deploy task for SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:47,863 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-1] handler.OpenRegionHandler(128): Opened SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 
on asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:47,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:48,035 INFO [PEWorker-6] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=18, ppid=17, state=RUNNABLE; MoveRegionProcedure hri=fb68d1abb3b8182f9bd555d291e6d272, source=asf911.gq1.ygridcore.net,38972,1530516853959, destination=asf911.gq1.ygridcore.net,38972,1530516853959; resume parent processing. 2018-07-02 07:34:48,035 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=20, ppid=18, state=SUCCESS; AssignProcedure table=SyncRep, region=fb68d1abb3b8182f9bd555d291e6d272, target=asf911.gq1.ygridcore.net,38972,1530516853959 in 452msec 2018-07-02 07:34:48,189 INFO [PEWorker-8] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=17, ppid=13, state=RUNNABLE:REOPEN_TABLE_REGIONS_CONFIRM_REOPENED; ReopenTableRegionsProcedure table=SyncRep; resume parent processing. 2018-07-02 07:34:48,190 INFO [PEWorker-8] procedure2.ProcedureExecutor(1266): Finished pid=18, ppid=17, state=SUCCESS; MoveRegionProcedure hri=fb68d1abb3b8182f9bd555d291e6d272, source=asf911.gq1.ygridcore.net,38972,1530516853959, destination=asf911.gq1.ygridcore.net,38972,1530516853959 in 1.1340sec 2018-07-02 07:34:48,445 INFO [PEWorker-16] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=13, state=RUNNABLE:TRANSIT_PEER_NEW_SYNC_REPLICATION_STATE; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing. 2018-07-02 07:34:48,445 INFO [PEWorker-16] procedure2.ProcedureExecutor(1266): Finished pid=17, ppid=13, state=SUCCESS; ReopenTableRegionsProcedure table=SyncRep in 1.4540sec 2018-07-02 07:34:48,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=21, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=22, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=23, ppid=13, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:48,804 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:48,804 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:48,805 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,42768,1530516853889 suceeded 2018-07-02 07:34:48,812 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,46264,1530516853823 suceeded 2018-07-02 07:34:48,812 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:48,815 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,38972,1530516853959 suceeded 2018-07-02 07:34:48,819 INFO [PEWorker-9] procedure2.ProcedureExecutor(1266): Finished 
pid=22, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 226msec 2018-07-02 07:34:48,970 INFO [PEWorker-11] procedure2.ProcedureExecutor(1266): Finished pid=23, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 215msec 2018-07-02 07:34:48,970 INFO [PEWorker-2] procedure2.ProcedureExecutor(1266): Finished pid=22, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 226msec 2018-07-02 07:34:49,376 INFO [PEWorker-7] procedure2.ProcedureExecutor(1266): Finished pid=23, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 215msec 2018-07-02 07:34:49,376 INFO [PEWorker-14] replication.TransitPeerSyncReplicationStateProcedure(132): Successfully transit current cluster state from DOWNGRADE_ACTIVE to ACTIVE for sync replication peer 1 2018-07-02 07:34:49,376 INFO [PEWorker-12] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=13, state=RUNNABLE:POST_PEER_SYNC_REPLICATION_STATE_TRANSITION; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing. 2018-07-02 07:34:49,376 INFO [PEWorker-10] procedure2.ProcedureExecutor(1266): Finished pid=21, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 234msec 2018-07-02 07:34:49,377 INFO [PEWorker-12] procedure2.ProcedureExecutor(1266): Finished pid=21, ppid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 234msec 2018-07-02 07:34:49,501 INFO [PEWorker-14] procedure2.ProcedureExecutor(1266): Finished pid=13, state=SUCCESS; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure in 3.6970sec 2018-07-02 07:34:49,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=13 2018-07-02 07:34:49,975 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.HMaster(3528): Client=jenkins//67.195.81.155 disable replication peer, id=1 2018-07-02 07:34:50,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] procedure2.ProcedureExecutor(887): Stored pid=24, state=RUNNABLE:PRE_PEER_MODIFICATION; org.apache.hadoop.hbase.master.replication.DisablePeerProcedure 2018-07-02 07:34:50,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=24 2018-07-02 07:34:50,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=24 2018-07-02 07:34:50,359 INFO [PEWorker-13] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=25, ppid=24, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=26, ppid=24, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=27, ppid=24, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:50,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=24 2018-07-02 07:34:50,619 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=DISABLE_PEER 2018-07-02 07:34:50,619 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer 
change event, peerId=1, type=DISABLE_PEER 2018-07-02 07:34:50,621 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=DISABLE_PEER 2018-07-02 07:34:50,643 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:50,643 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(483): Terminate replication source for 1 2018-07-02 07:34:50,643 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(481): Closing source 1 because: Peer 1 state or config changed. Will close the previous replication source and open a new one 2018-07-02 07:34:50,646 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:50,646 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(483): Terminate replication source for 1 2018-07-02 07:34:50,646 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(481): Closing source 1 because: Peer 1 state or config changed. Will close the previous replication source and open a new one 2018-07-02 07:34:50,656 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0 2018-07-02 07:34:50,656 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(483): Terminate replication source for 1 2018-07-02 07:34:50,656 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(481): Closing source 1 because: Peer 1 state or config changed. Will close the previous replication source and open a new one 2018-07-02 07:34:50,774 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x60aeee92 to localhost:59178 2018-07-02 07:34:50,775 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:50,775 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C46264%2C1530516853823,1 terminated 2018-07-02 07:34:50,775 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(490): Startup replication source for 1 2018-07-02 07:34:50,776 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for DISABLE on asf911.gq1.ygridcore.net,46264,1530516853823 suceeded 2018-07-02 07:34:50,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=24 2018-07-02 07:34:50,798 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x4487a649 to localhost:59178 2018-07-02 07:34:50,799 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:50,799 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C42768%2C1530516853889,1 terminated 2018-07-02 07:34:50,799 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] 
regionserver.ReplicationSourceManager(490): Startup replication source for 1 2018-07-02 07:34:50,800 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for DISABLE on asf911.gq1.ygridcore.net,42768,1530516853889 suceeded 2018-07-02 07:34:50,843 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x02558922 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:50,869 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x7369b708 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:50,890 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x27fd2076 to localhost:59178 2018-07-02 07:34:50,890 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:50,891 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS:2;asf911:38972.replicationSource.shipperasf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1,1 terminated 2018-07-02 07:34:50,891 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C38972%2C1530516853959,1 terminated 2018-07-02 07:34:50,891 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1] regionserver.ReplicationSourceManager(490): Startup replication source for 1 2018-07-02 07:34:50,892 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] replication.RefreshPeerProcedure(148): Refresh peer 1 for DISABLE on asf911.gq1.ygridcore.net,38972,1530516853959 suceeded 2018-07-02 07:34:50,905 INFO [PEWorker-5] procedure2.ProcedureExecutor(1266): Finished pid=27, ppid=24, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 424msec 2018-07-02 07:34:50,930 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x5fa86c83 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:50,949 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@419c6622, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:50,949 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@1aa8c5aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:50,949 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:50,949 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper 
ensemble=localhost:59178 2018-07-02 07:34:51,015 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:51,015 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:51,018 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500031 connected 2018-07-02 07:34:51,020 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500032 connected 2018-07-02 07:34:51,021 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@4c52e28, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:51,021 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(448): Replicating 62bd510b-3b5c-46d2-af05-cbc0179a0f7b -> 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:51,021 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(448): Replicating 62bd510b-3b5c-46d2-af05-cbc0179a0f7b -> 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:51,021 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C46264%2C1530516853823 2018-07-02 07:34:51,021 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C42768%2C1530516853889 2018-07-02 07:34:51,021 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:51,021 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:51,021 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:51,074 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:51,076 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500033 connected 2018-07-02 07:34:51,077 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] 
regionserver.ReplicationSource(448): Replicating 62bd510b-3b5c-46d2-af05-cbc0179a0f7b -> 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:51,077 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1 2018-07-02 07:34:51,077 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:51,077 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38972%2C1530516853959 2018-07-02 07:34:51,077 INFO [RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:34:51,157 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=26, ppid=24, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 445msec 2018-07-02 07:34:51,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=24 2018-07-02 07:34:51,405 INFO [PEWorker-8] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=24, state=RUNNABLE:POST_PEER_MODIFICATION; org.apache.hadoop.hbase.master.replication.DisablePeerProcedure; resume parent processing. 
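
The pid=13 state transition that finished above and the pid=24 peer disable completing just below are both driven from the test client through the Admin interface: the master stores the parent procedure, fans a RefreshPeerProcedure out to every region server, and the blocking client call polls the master until the parent completes, which is what the repeated "Checking to see if procedure is done" entries record. A minimal client-side sketch, assuming an HBase client recent enough to carry the sync-replication API (peer id "1" matches this run; cluster configuration is taken from the classpath):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.replication.SyncReplicationState;

    public class PeerStateOps {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Promote sync replication peer "1" from DOWNGRADE_ACTIVE to ACTIVE.
          // Server side this is the TransitPeerSyncReplicationStateProcedure
          // (pid=13 above): it reopens the peer's table regions and refreshes
          // the peer on every region server before reporting SUCCESS.
          admin.transitReplicationPeerSyncReplicationState("1", SyncReplicationState.ACTIVE);

          // Disable the peer. Same fan-out pattern: a DisablePeerProcedure
          // (pid=24 above) spawns one RefreshPeerProcedure per region server.
          admin.disableReplicationPeer("1");
        }
      }
    }

Both calls block until the master-side procedure finishes, so the polling seen in the log is hidden inside the client library; async variants of the same operations return futures instead.
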
2018-07-02 07:34:51,405 INFO [PEWorker-8] procedure2.ProcedureExecutor(1266): Finished pid=25, ppid=24, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 536msec 2018-07-02 07:34:51,405 INFO [PEWorker-8] replication.DisablePeerProcedure(67): Successfully disabled peer 1 2018-07-02 07:34:51,497 INFO [PEWorker-8] procedure2.ProcedureExecutor(1266): Finished pid=24, state=SUCCESS; org.apache.hadoop.hbase.master.replication.DisablePeerProcedure in 1.4300sec 2018-07-02 07:34:51,756 INFO [asf911:46264Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46264%2C1530516853823]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823/asf911.gq1.ygridcore.net%2C46264%2C1530516853823.1530516857838 at position: -1 2018-07-02 07:34:51,756 INFO [asf911:42768Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C42768%2C1530516853889]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,42768,1530516853889/asf911.gq1.ygridcore.net%2C42768%2C1530516853889.1530516857838 at position: -1 2018-07-02 07:34:51,756 INFO [asf911:38972Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep at position: -1 walGroup [asf911.gq1.ygridcore.net%2C38972%2C1530516853959]: currently replicating from: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959.1530516857838 at position: -1 2018-07-02 07:34:52,040 INFO [asf911:43014Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C43014%2C1530516865056]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 at position: -1 2018-07-02 07:34:52,040 INFO [asf911:38428Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38428%2C1530516865163]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 at position: -1 2018-07-02 07:34:52,041 INFO [asf911:33727Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup 
[asf911.gq1.ygridcore.net%2C33727%2C1530516865112]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250 at position: -1 2018-07-02 07:34:52,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=51263] master.MasterRpcServices(1144): Checking to see if procedure is done pid=24 2018-07-02 07:34:52,759 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2018-07-02 07:34:55,246 INFO [Time-limited test] hbase.HBaseTestingUtility(1096): Shutting down minicluster 2018-07-02 07:34:55,246 INFO [Time-limited test] client.ConnectionImplementation(1766): Closing master protocol: MasterService 2018-07-02 07:34:55,247 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x15d8f6fc to localhost:59178 2018-07-02 07:34:55,247 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,247 DEBUG [Time-limited test] util.JVMClusterUtil(238): Shutting down HBase Cluster 2018-07-02 07:34:55,248 INFO [Time-limited test] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,39498,1530516852236' ***** 2018-07-02 07:34:55,248 INFO [Time-limited test] regionserver.HRegionServer(2168): STOPPED: Stopped by Time-limited test 2018-07-02 07:34:55,252 INFO [M:0;asf911:39498] zookeeper.ReadOnlyZKClient(139): Connect 0x29772115 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:55,258 INFO [Time-limited test] master.ServerManager(916): Cluster shutdown requested of master=asf911.gq1.ygridcore.net,51263,1530516853697 2018-07-02 07:34:55,266 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:55,266 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:55,266 INFO [Time-limited test] procedure2.ProcedureExecutor(592): Stopping 2018-07-02 07:34:55,266 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:55,267 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x3128d54d to localhost:59178 2018-07-02 07:34:55,266 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:55,267 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,267 INFO [Time-limited test] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,46264,1530516853823' ***** 2018-07-02 07:34:55,267 INFO [Time-limited test] regionserver.HRegionServer(2168): STOPPED: Shutdown requested 2018-07-02 07:34:55,268 INFO [Time-limited test] 
regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,42768,1530516853889' ***** 2018-07-02 07:34:55,268 INFO [Time-limited test] regionserver.HRegionServer(2168): STOPPED: Shutdown requested 2018-07-02 07:34:55,268 INFO [Time-limited test] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,38972,1530516853959' ***** 2018-07-02 07:34:55,268 INFO [Time-limited test] regionserver.HRegionServer(2168): STOPPED: Shutdown requested 2018-07-02 07:34:55,268 INFO [RS:2;asf911:38972] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread 2018-07-02 07:34:55,268 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/running 2018-07-02 07:34:55,268 INFO [RS:0;asf911:46264] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread 2018-07-02 07:34:55,269 INFO [SplitLogWorker-asf911:38972] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting. 2018-07-02 07:34:55,268 INFO [RS:1;asf911:42768] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread 2018-07-02 07:34:55,270 INFO [RS:1;asf911:42768] regionserver.HeapMemoryManager(221): Stopping 2018-07-02 07:34:55,269 INFO [SplitLogWorker-asf911:38972] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,38972,1530516853959 exiting 2018-07-02 07:34:55,269 INFO [SplitLogWorker-asf911:46264] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting. 2018-07-02 07:34:55,269 INFO [RS:0;asf911:46264] regionserver.HeapMemoryManager(221): Stopping 2018-07-02 07:34:55,269 INFO [RS:2;asf911:38972] regionserver.HeapMemoryManager(221): Stopping 2018-07-02 07:34:55,270 INFO [RS:1;asf911:42768] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully. 2018-07-02 07:34:55,281 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting 2018-07-02 07:34:55,281 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running 2018-07-02 07:34:55,279 INFO [RS:0;asf911:46264] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully. 
2018-07-02 07:34:55,282 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running 2018-07-02 07:34:55,270 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting 2018-07-02 07:34:55,270 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting 2018-07-02 07:34:55,270 INFO [SplitLogWorker-asf911:46264] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,46264,1530516853823 exiting 2018-07-02 07:34:55,292 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting 2018-07-02 07:34:55,293 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running 2018-07-02 07:34:55,270 INFO [SplitLogWorker-asf911:42768] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting. 2018-07-02 07:34:55,297 DEBUG [M:0;asf911:39498] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d48b0e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:34:55,282 INFO [RS:0;asf911:46264] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2018-07-02 07:34:55,297 DEBUG [M:0;asf911:39498] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@684a2b2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:55,282 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running 2018-07-02 07:34:55,281 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/running 2018-07-02 07:34:55,281 INFO [RS:1;asf911:42768] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2018-07-02 07:34:55,281 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting 2018-07-02 07:34:55,281 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting 2018-07-02 07:34:55,281 INFO [RS:2;asf911:38972] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully. 2018-07-02 07:34:55,298 INFO [RS:2;asf911:38972] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 
2018-07-02 07:34:55,298 INFO [RS:2;asf911:38972] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:55,298 DEBUG [RS:2;asf911:38972] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-07-02 07:34:55,299 INFO [RS:2;asf911:38972] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x5c5c6f91 to localhost:59178 2018-07-02 07:34:55,307 DEBUG [RS:2;asf911:38972] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,307 INFO [RS:2;asf911:38972] regionserver.CompactSplit(394): Waiting for Split Thread to finish... 2018-07-02 07:34:55,307 INFO [RS:2;asf911:38972] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish... 2018-07-02 07:34:55,307 INFO [RS:2;asf911:38972] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish... 2018-07-02 07:34:55,298 INFO [RS:0;asf911:46264] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:55,308 DEBUG [RS:0;asf911:46264] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-07-02 07:34:55,308 INFO [RS:0;asf911:46264] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x4c7d5474 to localhost:59178 2018-07-02 07:34:55,308 DEBUG [RS:0;asf911:46264] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,308 INFO [RS:0;asf911:46264] regionserver.HRegionServer(1399): Waiting on 1 regions to close 2018-07-02 07:34:55,308 DEBUG [RS:0;asf911:46264] regionserver.HRegionServer(1403): Online Regions={a2e46a0365d875b8253d213cfc9335b7=hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7.} 2018-07-02 07:34:55,298 INFO [RS:1;asf911:42768] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:55,309 DEBUG [RS:1;asf911:42768] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-07-02 07:34:55,297 INFO [M:0;asf911:39498] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,39498,1530516852236 2018-07-02 07:34:55,309 DEBUG [M:0;asf911:39498] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-07-02 07:34:55,309 INFO [M:0;asf911:39498] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x29772115 to localhost:59178 2018-07-02 07:34:55,309 DEBUG [M:0;asf911:39498] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,309 INFO [M:0;asf911:39498] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,39498,1530516852236; all regions closed. 2018-07-02 07:34:55,309 DEBUG [M:0;asf911:39498] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,297 INFO [SplitLogWorker-asf911:42768] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,42768,1530516853889 exiting 2018-07-02 07:34:55,309 INFO [M:0;asf911:39498] hbase.ChoreService(327): Chore service for: master/asf911:0 had [] on shutdown 2018-07-02 07:34:55,309 INFO [RS:1;asf911:42768] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x4fdbce1e to localhost:59178 2018-07-02 07:34:55,310 DEBUG [M:0;asf911:39498] master.HMaster(1292): Stopping service threads 2018-07-02 07:34:55,310 DEBUG [RS:1;asf911:42768] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,310 INFO [RS:1;asf911:42768] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,42768,1530516853889; all regions closed. 
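
Everything from "Shutting down minicluster" onward is the standard teardown path: JVMClusterUtil asks the masters and region servers to stop, each region server closes its online regions (flushing dirty memstores first), and the ZooKeeper and DFS miniclusters are torn down afterwards. In a test this is normally a one-line @AfterClass hook; a sketch, with TEST_UTIL as an illustrative name for the utility instance that started the cluster:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class SyncRepTestTeardown {
      // Illustrative name; in the real test this is the same utility that
      // started the minicluster in a @BeforeClass hook.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the HBase cluster (masters and region servers), then the DFS
        // and ZooKeeper miniclusters; this produces the "***** STOPPING region
        // server" and "stopping server ...; all regions closed." entries above.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
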
2018-07-02 07:34:55,311 INFO [RS:2;asf911:38972] regionserver.HRegionServer(1399): Waiting on 2 regions to close 2018-07-02 07:34:55,311 DEBUG [RS:2;asf911:38972] regionserver.HRegionServer(1403): Online Regions={fb68d1abb3b8182f9bd555d291e6d272=SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272., 1588230740=hbase:meta,,1.1588230740} 2018-07-02 07:34:55,318 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing a2e46a0365d875b8253d213cfc9335b7, disabling compactions & flushes 2018-07-02 07:34:55,318 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 1588230740, disabling compactions & flushes 2018-07-02 07:34:55,318 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region hbase:meta,,1.1588230740 2018-07-02 07:34:55,318 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:55,319 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(2584): Flushing 1/1 column families, dataSize=78 B heapSize=232 B 2018-07-02 07:34:55,318 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1527): Closing fb68d1abb3b8182f9bd555d291e6d272, disabling compactions & flushes 2018-07-02 07:34:55,322 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:55,322 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(2584): Flushing 1/1 column families, dataSize=31.25 KB heapSize=101.56 KB 2018-07-02 07:34:55,323 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:55,330 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(2584): Flushing 3/3 column families, dataSize=4.07 KB heapSize=6.94 KB 2018-07-02 07:34:55,345 INFO [M:0;asf911:39498] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:39498 2018-07-02 07:34:55,347 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-07-02 07:34:55,348 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-07-02 07:34:55,348 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:55,349 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:55,349 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-07-02 07:34:55,358 DEBUG [RS:1;asf911:42768] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:55,358 INFO [RS:1;asf911:42768] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C42768%2C1530516853889:(num 1530516857838) 2018-07-02 07:34:55,358 DEBUG [RS:1;asf911:42768] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,359 INFO [RS:1;asf911:42768] regionserver.Leases(149): Closed leases 2018-07-02 07:34:55,359 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-07-02 07:34:55,359 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-07-02 07:34:55,360 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-07-02 07:34:55,360 INFO [RS:1;asf911:42768] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,42768,1530516853889 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:34:55,370 INFO [RS:1;asf911:42768] regionserver.CompactSplit(394): Waiting for Split Thread to finish... 2018-07-02 07:34:55,370 INFO [RS:1;asf911:42768] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish... 2018-07-02 07:34:55,371 INFO [RS:1;asf911:42768] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish... 2018-07-02 07:34:55,371 INFO [RS:1;asf911:42768] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing 2018-07-02 07:34:55,372 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting. 2018-07-02 07:34:55,377 DEBUG [M:0;asf911:39498] zookeeper.RecoverableZooKeeper(176): Node /cluster1/rs/asf911.gq1.ygridcore.net,39498,1530516852236 already deleted, retry=false 2018-07-02 07:34:55,382 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:39498-0x16459e9b4500000, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/backup-masters/asf911.gq1.ygridcore.net,39498,1530516852236 2018-07-02 07:34:55,384 INFO [M:0;asf911:39498] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,39498,1530516852236; zookeeper connection closed. 
2018-07-02 07:34:55,482 INFO [RS:1;asf911:42768] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x7369b708 to localhost:59178 2018-07-02 07:34:55,485 DEBUG [RS:1;asf911:42768] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:55,486 INFO [RS:1;asf911:42768] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C42768%2C1530516853889,1 terminated 2018-07-02 07:34:55,489 INFO [RS:1;asf911:42768] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:42768 2018-07-02 07:34:55,498 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:55,498 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:55,499 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/rs 2018-07-02 07:34:55,499 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/rs 2018-07-02 07:34:55,499 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,42768,1530516853889 2018-07-02 07:34:55,499 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:42768-0x16459e9b4500003, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/rs 2018-07-02 07:34:55,499 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/rs 2018-07-02 07:34:55,507 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,42768,1530516853889] 2018-07-02 07:34:55,507 INFO [RS:1;asf911:42768] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,42768,1530516853889; zookeeper connection closed. 
2018-07-02 07:34:55,507 INFO [RegionServerTracker-0] master.ServerManager(597): Cluster shutdown set; asf911.gq1.ygridcore.net,42768,1530516853889 expired; onlineServers=2 2018-07-02 07:34:55,508 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@648bfe19] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@648bfe19 2018-07-02 07:34:55,555 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW]]} size 7657 2018-07-02 07:34:55,556 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741840_1016 size 7657 2018-07-02 07:34:55,556 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741839_1015{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW]]} size 4898 2018-07-02 07:34:55,556 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741840_1016 size 7657 2018-07-02 07:34:55,557 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741839_1015 size 4898 2018-07-02 07:34:55,557 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741839_1015 size 4898 2018-07-02 07:34:55,591 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0 2018-07-02 07:34:55,592 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0 2018-07-02 07:34:55,592 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|FINALIZED], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0 2018-07-02 07:34:55,593 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=31.25 KB at sequenceid=1007 (bloomFilter=true), to=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/.tmp/cf/4df7b03a1d8a4846861412295e8119f8 2018-07-02 07:34:55,687 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/.tmp/cf/4df7b03a1d8a4846861412295e8119f8 as hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/cf/4df7b03a1d8a4846861412295e8119f8 2018-07-02 07:34:55,699 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HStore(1070): Added hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/cf/4df7b03a1d8a4846861412295e8119f8, entries=1000, sequenceid=1007, filesize=40.5 K 2018-07-02 07:34:55,713 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(2793): Finished flush of dataSize ~31.25 KB/32000, heapSize ~101.80 KB/104240, currentSize=0 B/0 for fb68d1abb3b8182f9bd555d291e6d272 in 391ms, sequenceid=1007, compaction requested=false 2018-07-02 07:34:55,725 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] wal.WALSplitter(678): Wrote file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/default/SyncRep/fb68d1abb3b8182f9bd555d291e6d272/recovered.edits/1010.seqid, newMaxSeqId=1010, maxSeqId=4 2018-07-02 07:34:55,729 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1681): Closed SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 2018-07-02 07:34:55,729 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] handler.CloseRegionHandler(124): Closed SyncRep,,1530516871850.fb68d1abb3b8182f9bd555d291e6d272. 
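
The close of region fb68d1abb3b8182f9bd555d291e6d272 above shows the flush protocol in full: the 31.25 KB memstore is written to a temporary HFile under .tmp/cf, committed into the cf family directory, added to the store at sequenceid=1007, and a recovered.edits/1010.seqid marker records the new max sequence id so a later reopen replays nothing that was already flushed. The same flush can also be requested explicitly ahead of a close or move; a sketch against the SyncRep table from this run, using the generic Admin#flush call rather than the internal close-time flush:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSyncRep {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Ask every region of SyncRep to flush its memstores. Each region
          // writes a .tmp HFile and commits it into the family directory,
          // the same DefaultStoreFlusher / "Committing ..." sequence the
          // close path logged above.
          admin.flush(TableName.valueOf("SyncRep"));
        }
      }
    }
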
2018-07-02 07:34:55,956 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=3.68 KB at sequenceid=18 (bloomFilter=false), to=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/.tmp/info/556c222423ba48a38ce943fb115c1b87 2018-07-02 07:34:55,956 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/.tmp/info/71cdad61447c4630adfca73b4947eefc 2018-07-02 07:34:55,967 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/.tmp/info/71cdad61447c4630adfca73b4947eefc as hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/info/71cdad61447c4630adfca73b4947eefc 2018-07-02 07:34:55,976 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/info/71cdad61447c4630adfca73b4947eefc, entries=2, sequenceid=6, filesize=4.8 K 2018-07-02 07:34:55,982 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(2793): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for a2e46a0365d875b8253d213cfc9335b7 in 663ms, sequenceid=6, compaction requested=false 2018-07-02 07:34:56,006 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW]]} size 0 2018-07-02 07:34:56,006 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW]]} size 0 2018-07-02 07:34:56,007 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741842_1018 size 4996 2018-07-02 07:34:56,007 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=222 B at sequenceid=18 (bloomFilter=false), to=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/.tmp/rep_barrier/2b79b78792424dd5ba86e93f3d861292 2018-07-02 07:34:56,011 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote 
file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/namespace/a2e46a0365d875b8253d213cfc9335b7/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2018-07-02 07:34:56,012 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:56,012 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed hbase:namespace,,1530516859348.a2e46a0365d875b8253d213cfc9335b7. 2018-07-02 07:34:56,031 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW]]} size 0 2018-07-02 07:34:56,032 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0 2018-07-02 07:34:56,032 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|FINALIZED]]} size 0 2018-07-02 07:34:56,033 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=172 B at sequenceid=18 (bloomFilter=false), to=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/.tmp/table/486d24f2f1c642fc936906a37cadd8d7 2018-07-02 07:34:56,041 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/.tmp/info/556c222423ba48a38ce943fb115c1b87 as hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/info/556c222423ba48a38ce943fb115c1b87 2018-07-02 07:34:56,050 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/info/556c222423ba48a38ce943fb115c1b87, entries=25, sequenceid=18, filesize=7.5 K 2018-07-02 07:34:56,054 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/.tmp/rep_barrier/2b79b78792424dd5ba86e93f3d861292 as 
hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/rep_barrier/2b79b78792424dd5ba86e93f3d861292 2018-07-02 07:34:56,063 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/rep_barrier/2b79b78792424dd5ba86e93f3d861292, entries=2, sequenceid=18, filesize=4.9 K 2018-07-02 07:34:56,067 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/.tmp/table/486d24f2f1c642fc936906a37cadd8d7 as hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/table/486d24f2f1c642fc936906a37cadd8d7 2018-07-02 07:34:56,075 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/table/486d24f2f1c642fc936906a37cadd8d7, entries=4, sequenceid=18, filesize=4.7 K 2018-07-02 07:34:56,078 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(2793): Finished flush of dataSize ~4.07 KB/4165, heapSize ~7.64 KB/7824, currentSize=0 B/0 for 1588230740 in 759ms, sequenceid=18, compaction requested=false 2018-07-02 07:34:56,092 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/data/hbase/meta/1588230740/recovered.edits/21.seqid, newMaxSeqId=21, maxSeqId=1 2018-07-02 07:34:56,093 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2018-07-02 07:34:56,094 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed hbase:meta,,1.1588230740 2018-07-02 07:34:56,095 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed hbase:meta,,1.1588230740 2018-07-02 07:34:56,110 INFO [RS:0;asf911:46264] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,46264,1530516853823; all regions closed. 2018-07-02 07:34:56,121 INFO [RS:2;asf911:38972] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,38972,1530516853959; all regions closed. 
2018-07-02 07:34:56,122 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741832_1008{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW]]} size 1409 2018-07-02 07:34:56,122 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,46264,1530516853823/asf911.gq1.ygridcore.net%2C46264%2C1530516853823.1530516857838 not finished, retry = 0 2018-07-02 07:34:56,122 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741832_1008 size 1409 2018-07-02 07:34:56,123 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741832_1008 size 1409 2018-07-02 07:34:56,127 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW]]} size 0 2018-07-02 07:34:56,127 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW]]} size 0 2018-07-02 07:34:56,127 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741833_1009 size 5846 2018-07-02 07:34:56,130 DEBUG [RS:2;asf911:38972] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:56,130 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C38972%2C1530516853959.meta:.meta(num 1530516858322) 2018-07-02 07:34:56,137 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:56,137 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:56,137 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 0 2018-07-02 07:34:56,145 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:56,145 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:56,145 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW]]} size 0 2018-07-02 07:34:56,149 DEBUG [RS:2;asf911:38972] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:56,149 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(863): Closed WAL: DualAsyncFSWAL asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1:.syncrep(num 1530516886462) 2018-07-02 07:34:56,158 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/WALs/asf911.gq1.ygridcore.net,38972,1530516853959/asf911.gq1.ygridcore.net%2C38972%2C1530516853959.1530516857838 not finished, retry = 0 2018-07-02 07:34:56,158 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741831_1007{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-fb979981-ad7d-4df7-af08-69017228b672:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|RBW]]} size 617 2018-07-02 
07:34:56,158 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741831_1007 size 617 2018-07-02 07:34:56,158 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741831_1007 size 617 2018-07-02 07:34:56,227 DEBUG [RS:0;asf911:46264] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:56,227 INFO [RS:0;asf911:46264] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C46264%2C1530516853823:(num 1530516857838) 2018-07-02 07:34:56,227 DEBUG [RS:0;asf911:46264] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:56,227 INFO [RS:0;asf911:46264] regionserver.Leases(149): Closed leases 2018-07-02 07:34:56,228 INFO [RS:0;asf911:46264] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,46264,1530516853823 Period: 120000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:34:56,228 INFO [RS:0;asf911:46264] regionserver.CompactSplit(394): Waiting for Split Thread to finish... 2018-07-02 07:34:56,228 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting. 2018-07-02 07:34:56,228 INFO [RS:0;asf911:46264] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish... 2018-07-02 07:34:56,228 INFO [RS:0;asf911:46264] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish... 2018-07-02 07:34:56,229 INFO [RS:0;asf911:46264] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing 2018-07-02 07:34:56,262 DEBUG [RS:2;asf911:38972] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/oldWALs 2018-07-02 07:34:56,262 INFO [RS:2;asf911:38972] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C38972%2C1530516853959:(num 1530516857838) 2018-07-02 07:34:56,263 DEBUG [RS:2;asf911:38972] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:56,263 INFO [RS:2;asf911:38972] regionserver.Leases(149): Closed leases 2018-07-02 07:34:56,263 INFO [RS:2;asf911:38972] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,38972,1530516853959 Period: 120000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:34:56,263 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting. 
2018-07-02 07:34:56,264 INFO [RS:2;asf911:38972] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing 2018-07-02 07:34:56,370 INFO [RS:0;asf911:46264] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x02558922 to localhost:59178 2018-07-02 07:34:56,370 DEBUG [RS:0;asf911:46264] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:56,370 INFO [RS:0;asf911:46264] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C46264%2C1530516853823,1 terminated 2018-07-02 07:34:56,372 INFO [RS:0;asf911:46264] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:46264 2018-07-02 07:34:56,392 INFO [RS:2;asf911:38972] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x5fa86c83 to localhost:59178 2018-07-02 07:34:56,392 DEBUG [RS:2;asf911:38972] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:56,393 INFO [RS:2;asf911:38972] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1,1 terminated 2018-07-02 07:34:56,393 INFO [RS:2;asf911:38972] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-1.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C38972%2C1530516853959,1 terminated 2018-07-02 07:34:56,394 INFO [RS:2;asf911:38972] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:38972 2018-07-02 07:34:56,400 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:56,400 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,46264,1530516853823 2018-07-02 07:34:56,400 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster1/rs 2018-07-02 07:34:56,407 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38972-0x16459e9b4500004, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:56,408 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:46264-0x16459e9b4500002, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/rs/asf911.gq1.ygridcore.net,38972,1530516853959 2018-07-02 07:34:56,415 INFO [RS:0;asf911:46264] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,46264,1530516853823; zookeeper connection closed. 
2018-07-02 07:34:56,415 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,46264,1530516853823] 2018-07-02 07:34:56,415 INFO [RegionServerTracker-0] master.ServerManager(597): Cluster shutdown set; asf911.gq1.ygridcore.net,46264,1530516853823 expired; onlineServers=1 2018-07-02 07:34:56,416 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,38972,1530516853959] 2018-07-02 07:34:56,416 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@45cfbfea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@45cfbfea 2018-07-02 07:34:56,416 INFO [RegionServerTracker-0] master.ServerManager(597): Cluster shutdown set; asf911.gq1.ygridcore.net,38972,1530516853959 expired; onlineServers=0 2018-07-02 07:34:56,416 INFO [RegionServerTracker-0] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,51263,1530516853697' ***** 2018-07-02 07:34:56,416 INFO [RegionServerTracker-0] regionserver.HRegionServer(2168): STOPPED: Cluster shutdown set; onlineServers=0 2018-07-02 07:34:56,418 DEBUG [M:1;asf911:51263] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6288a94c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0 2018-07-02 07:34:56,418 INFO [M:1;asf911:51263] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,51263,1530516853697 2018-07-02 07:34:56,418 DEBUG [M:1;asf911:51263] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-07-02 07:34:56,418 INFO [M:1;asf911:51263] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,51263,1530516853697; all regions closed. 2018-07-02 07:34:56,418 DEBUG [M:1;asf911:51263] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:56,418 INFO [M:1;asf911:51263] hbase.ChoreService(327): Chore service for: master/asf911:0 had [[ScheduledChore: Name: asf911.gq1.ygridcore.net,51263,1530516853697-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: asf911.gq1.ygridcore.net,51263,1530516853697-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: ReplicationBarrierCleaner Period: 43200000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 600000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-asf911:51263 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: asf911.gq1.ygridcore.net,51263,1530516853697-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: asf911.gq1.ygridcore.net,51263,1530516853697-RegionNormalizerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 600000 Unit: MILLISECONDS], [ScheduledChore: Name: asf911.gq1.ygridcore.net,51263,1530516853697-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: FlushedSequenceIdFlusher Period: 10800000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:34:56,419 INFO [M:1;asf911:51263] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish... 2018-07-02 07:34:56,419 INFO [M:1;asf911:51263] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish... 
2018-07-02 07:34:56,419 DEBUG [M:1;asf911:51263] master.HMaster(1292): Stopping service threads 2018-07-02 07:34:56,419 DEBUG [Thread-158-HFileCleaner.small.0-1530516855419] cleaner.HFileCleaner(253): Exit Thread[Thread-158-HFileCleaner.small.0-1530516855419,5,FailOnTimeoutGroup] 2018-07-02 07:34:56,419 DEBUG [Thread-158-HFileCleaner.large.0-1530516855419] cleaner.HFileCleaner(253): Exit Thread[Thread-158-HFileCleaner.large.0-1530516855419,5,FailOnTimeoutGroup] 2018-07-02 07:34:56,419 DEBUG [OldWALsCleaner-0] cleaner.LogCleaner(176): Exiting cleaner. 2018-07-02 07:34:56,420 DEBUG [OldWALsCleaner-1] cleaner.LogCleaner(176): Exiting cleaner. 2018-07-02 07:34:56,423 INFO [RS:2;asf911:38972] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,38972,1530516853959; zookeeper connection closed. 2018-07-02 07:34:56,424 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3de89a30] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3de89a30 2018-07-02 07:34:56,424 INFO [Time-limited test] util.JVMClusterUtil(326): Shutdown of 2 master(s) and 3 regionserver(s) complete 2018-07-02 07:34:56,432 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster1/master 2018-07-02 07:34:56,440 DEBUG [M:1;asf911:51263] zookeeper.RecoverableZooKeeper(176): Node /cluster1/master already deleted, retry=false 2018-07-02 07:34:56,440 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Set watcher on znode that does not yet exist, /cluster1/master 2018-07-02 07:34:56,440 DEBUG [M:1;asf911:51263] master.ActiveMasterManager(280): master:51263-0x16459e9b4500001, quorum=localhost:59178, baseZNode=/cluster1 Failed delete of our master address node; KeeperErrorCode = NoNode for /cluster1/master 2018-07-02 07:34:56,444 INFO [M:1;asf911:51263] master.ServerManager(1056): Writing .lastflushedseqids file at: hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/.lastflushedseqids 2018-07-02 07:34:56,478 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|FINALIZED]]} size 0 2018-07-02 07:34:56,479 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-7c9c0b2f-aef6-4160-b1e0-2b69b7f95ac9:NORMAL:127.0.0.1:33954|FINALIZED], ReplicaUC[[DISK]DS-41ea254c-eaee-49a3-a66c-436f1b7e08ee:NORMAL:127.0.0.1:48785|FINALIZED]]} size 0 2018-07-02 07:34:56,479 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741844_1020 size 177 2018-07-02 07:34:56,480 INFO [M:1;asf911:51263] 
assignment.AssignmentManager(234): Stopping assignment manager 2018-07-02 07:34:56,484 INFO [M:1;asf911:51263] procedure2.RemoteProcedureDispatcher(116): Stopping procedure remote dispatcher 2018-07-02 07:34:56,487 INFO [M:1;asf911:51263] wal.WALProcedureStore(290): Stopping the WAL Procedure Store, isAbort=false 2018-07-02 07:34:56,505 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:45556 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 202 2018-07-02 07:34:56,506 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48785 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 202 2018-07-02 07:34:56,506 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33954 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a61d23bd-aa4f-49fc-9440-b11d265cb2a8:NORMAL:127.0.0.1:45556|RBW], ReplicaUC[[DISK]DS-56d6abd0-3a09-4c43-b351-0b985710fa52:NORMAL:127.0.0.1:48785|RBW], ReplicaUC[[DISK]DS-137fa992-0531-460e-8da1-5d0327e9db5c:NORMAL:127.0.0.1:33954|RBW]]} size 202 2018-07-02 07:34:56,507 INFO [M:1;asf911:51263] hbase.ChoreService(327): Chore service for: master/asf911:0.splitLogManager. had [[ScheduledChore: Name: SplitLogManager Timeout Monitor Period: 1000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:34:56,507 INFO [M:1;asf911:51263] flush.MasterFlushTableProcedureManager(81): stop: server shutting down. 2018-07-02 07:34:56,509 INFO [M:1;asf911:51263] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:51263 2018-07-02 07:34:56,515 DEBUG [M:1;asf911:51263] zookeeper.RecoverableZooKeeper(176): Node /cluster1/rs/asf911.gq1.ygridcore.net,51263,1530516853697 already deleted, retry=false 2018-07-02 07:34:56,523 INFO [M:1;asf911:51263] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,51263,1530516853697; zookeeper connection closed. 
2018-07-02 07:34:56,526 WARN [Time-limited test] datanode.DirectoryScanner(529): DirectoryScanner: shutdown has been called 2018-07-02 07:34:56,541 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2018-07-02 07:34:56,646 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:38505] datanode.IncrementalBlockReportManager(132): IncrementalBlockReportManager interrupted 2018-07-02 07:34:56,647 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:38505] datanode.BPServiceActor(670): Ending block pool service for: Block pool BP-1443818035-67.195.81.155-1530516847306 (Datanode Uuid b63c29a3-a7bf-4a69-a4ea-7f6519264b08) service to localhost/127.0.0.1:38505 2018-07-02 07:34:56,654 WARN [Time-limited test] datanode.DirectoryScanner(529): DirectoryScanner: shutdown has been called 2018-07-02 07:34:56,667 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2018-07-02 07:34:56,773 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:38505] datanode.IncrementalBlockReportManager(132): IncrementalBlockReportManager interrupted 2018-07-02 07:34:56,773 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:38505] datanode.BPServiceActor(670): Ending block pool service for: Block pool BP-1443818035-67.195.81.155-1530516847306 (Datanode Uuid e396df7c-0162-458b-b04d-8b308b84c161) service to localhost/127.0.0.1:38505 2018-07-02 07:34:56,781 WARN [Time-limited test] datanode.DirectoryScanner(529): DirectoryScanner: shutdown has been called 2018-07-02 07:34:56,794 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2018-07-02 07:34:56,900 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data1/, 
[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:38505] datanode.IncrementalBlockReportManager(132): IncrementalBlockReportManager interrupted 2018-07-02 07:34:56,901 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/daf411a8-56e1-63f0-483f-728906d2da7e/cluster_ee72fc15-b2bb-b35f-fb0d-0323a780aebe/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:38505] datanode.BPServiceActor(670): Ending block pool service for: Block pool BP-1443818035-67.195.81.155-1530516847306 (Datanode Uuid deb1bee5-f7f2-4770-8e01-3677f7c2c853) service to localhost/127.0.0.1:38505 2018-07-02 07:34:56,933 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2018-07-02 07:34:57,040 INFO [asf911:38428Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38428%2C1530516865163]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 at position: -1 2018-07-02 07:34:57,040 INFO [asf911:43014Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C43014%2C1530516865056]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 at position: -1 2018-07-02 07:34:57,041 INFO [asf911:33727Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C33727%2C1530516865112]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250 at position: -1 2018-07-02 07:34:57,065 INFO [Time-limited test] hbase.HBaseTestingUtility(1103): Minicluster is down 2018-07-02 07:34:57,067 INFO [Thread-1561] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,43014,1530516865056' ***** 2018-07-02 07:34:57,068 INFO [Thread-1561] regionserver.HRegionServer(2168): STOPPED: Stop RS for test 2018-07-02 07:34:57,068 INFO [RS:0;asf911:43014] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread 2018-07-02 07:34:57,068 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(108): Waiting for [asf911.gq1.ygridcore.net,43014,1530516865056] to be listed as dead in master 2018-07-02 07:34:57,068 INFO [RS:0;asf911:43014] regionserver.HeapMemoryManager(221): Stopping 2018-07-02 07:34:57,068 INFO [SplitLogWorker-asf911:43014] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting. 
2018-07-02 07:34:57,068 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting 2018-07-02 07:34:57,068 INFO [SplitLogWorker-asf911:43014] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,43014,1530516865056 exiting 2018-07-02 07:34:57,071 INFO [RS:0;asf911:43014] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully. 2018-07-02 07:34:57,071 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting 2018-07-02 07:34:57,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3574): Client=jenkins//67.195.81.155 transit current cluster state to DOWNGRADE_ACTIVE in a synchronous replication peer id=1 2018-07-02 07:34:57,072 INFO [RS:0;asf911:43014] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2018-07-02 07:34:57,072 INFO [RS:0;asf911:43014] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:57,072 DEBUG [RS:0;asf911:43014] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-07-02 07:34:57,072 INFO [RS:0;asf911:43014] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x703fa212 to localhost:59178 2018-07-02 07:34:57,073 DEBUG [RS:0;asf911:43014] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:57,073 INFO [RS:0;asf911:43014] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,43014,1530516865056; all regions closed. 2018-07-02 07:34:57,079 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 91 2018-07-02 07:34:57,079 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 not finished, retry = 0 2018-07-02 07:34:57,080 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741840_1016 size 91 2018-07-02 07:34:57,080 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741840_1016 size 91 2018-07-02 07:34:57,100 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-07-02 07:34:57,162 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-07-02 07:34:57,186 DEBUG [RS:0;asf911:43014] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:34:57,186 WARN [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C43014%2C1530516865056,1] regionserver.WALEntryStream(208): Couldn't get file length information about log 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250, it was not closed cleanly; currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 at position: 0 2018-07-02 07:34:57,186 INFO [RS:0;asf911:43014] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C43014%2C1530516865056:(num 1530516883250) 2018-07-02 07:34:57,186 DEBUG [RS:0;asf911:43014] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:57,186 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C43014%2C1530516865056,1] regionserver.WALEntryStream(250): Reached the end of log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056/asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250 2018-07-02 07:34:57,187 INFO [RS:0;asf911:43014] regionserver.Leases(149): Closed leases 2018-07-02 07:34:57,190 INFO [RS:0;asf911:43014] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,43014,1530516865056 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:34:57,190 INFO [RS:0;asf911:43014] regionserver.CompactSplit(394): Waiting for Split Thread to finish... 2018-07-02 07:34:57,190 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting. 2018-07-02 07:34:57,190 INFO [RS:0;asf911:43014] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish... 2018-07-02 07:34:57,190 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C43014%2C1530516865056,1] regionserver.ReplicationSourceManager(693): Removing 1 logs in the list: [asf911.gq1.ygridcore.net%2C43014%2C1530516865056.1530516883250] 2018-07-02 07:34:57,190 INFO [RS:0;asf911:43014] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish... 
2018-07-02 07:34:57,190 INFO [RS:0;asf911:43014] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing 2018-07-02 07:34:57,192 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C43014%2C1530516865056,1] regionserver.ReplicationSourceManager(707): Removing 0 logs from remote dir hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/remoteWALs in the list: [] 2018-07-02 07:34:57,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] procedure2.ProcedureExecutor(887): Stored pid=23, state=RUNNABLE:PRE_PEER_SYNC_REPLICATION_STATE_TRANSITION; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure 2018-07-02 07:34:57,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:34:57,307 INFO [RS:0;asf911:43014] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x5496e3df to localhost:59178 2018-07-02 07:34:57,310 DEBUG [RS:0;asf911:43014] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:34:57,310 INFO [RS:0;asf911:43014] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C43014%2C1530516865056,1 terminated 2018-07-02 07:34:57,312 INFO [RS:0;asf911:43014] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:43014 2018-07-02 07:34:57,324 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:34:57,324 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:57,324 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:57,324 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 2018-07-02 07:34:57,324 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:43014-0x16459e9b450000d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:34:57,332 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,43014,1530516865056] 2018-07-02 07:34:57,332 INFO [RS:0;asf911:43014] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,43014,1530516865056; zookeeper connection closed. 
2018-07-02 07:34:57,332 INFO [RegionServerTracker-0] master.ServerManager(604): Processing expiration of asf911.gq1.ygridcore.net,43014,1530516865056 on asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:34:57,336 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4e7806a2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4e7806a2 2018-07-02 07:34:57,340 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:57,340 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:57,341 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:57,341 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:57,345 INFO [Time-limited test-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 znode expired, triggering replicatorRemoved event 2018-07-02 07:34:57,346 INFO [Time-limited test-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,43014,1530516865056 znode expired, triggering replicatorRemoved event 2018-07-02 07:34:57,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:34:57,348 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:34:57,348 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:34:57,350 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:57,350 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 2018-07-02 07:34:57,350 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:57,350 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing 
znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:34:57,399 INFO [PEWorker-16] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=25, ppid=23, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}, {pid=26, ppid=23, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure}] 2018-07-02 07:34:57,457 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(887): Stored pid=24, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,43014,1530516865056, splitWal=true, meta=false 2018-07-02 07:34:57,457 DEBUG [RegionServerTracker-0] assignment.AssignmentManager(1321): Added=asf911.gq1.ygridcore.net,43014,1530516865056 to dead servers, submitted shutdown handler to be executed meta=false 2018-07-02 07:34:57,461 INFO [PEWorker-4] procedure.ServerCrashProcedure(118): Start pid=24, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,43014,1530516865056, splitWal=true, meta=false 2018-07-02 07:34:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:34:57,633 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(239): Splitting WALs pid=24, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,43014,1530516865056, splitWal=true, meta=false 2018-07-02 07:34:57,636 DEBUG [PEWorker-4] master.MasterWalManager(283): Renamed region directory: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056-splitting 2018-07-02 07:34:57,636 INFO [PEWorker-4] master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,43014,1530516865056] 2018-07-02 07:34:57,638 INFO [PEWorker-4] master.SplitLogManager(177): hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056-splitting is empty dir, no logs to split 2018-07-02 07:34:57,638 INFO [PEWorker-4] master.SplitLogManager(241): Started splitting 0 logs in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056-splitting] for [asf911.gq1.ygridcore.net,43014,1530516865056] 2018-07-02 07:34:57,641 INFO [PEWorker-4] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,43014,1530516865056-splitting] in 3ms 2018-07-02 07:34:57,641 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(247): Done splitting WALs pid=24, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,43014,1530516865056, splitWal=true, meta=false 2018-07-02 07:34:57,670 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:57,670 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0] regionserver.RefreshPeerCallable(55): Received a peer change event, peerId=1, type=TRANSIT_SYNC_REPLICATION_STATE 2018-07-02 07:34:57,672 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,33727,1530516865112 succeeded 2018-07-02 07:34:57,672 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] replication.RefreshPeerProcedure(148): Refresh peer 1 for TRANSIT_SYNC_REPLICATION_STATE on asf911.gq1.ygridcore.net,38428,1530516865163 succeeded 2018-07-02 07:34:57,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:34:57,857 INFO [PEWorker-7] procedure2.ProcedureExecutor(1266): Finished pid=26, ppid=23, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 277msec 2018-07-02 07:34:58,026 INFO [PEWorker-14] procedure2.ProcedureExecutor(1266): Finished pid=24, state=SUCCESS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,43014,1530516865056, splitWal=true, meta=false in 522msec 2018-07-02 07:34:58,068 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(111): Server [asf911.gq1.ygridcore.net,43014,1530516865056] marked as dead, waiting for it to finish dead processing 2018-07-02 07:34:58,068 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(117): Server [asf911.gq1.ygridcore.net,43014,1530516865056] done with server shutdown processing 2018-07-02 07:34:58,089 INFO [Thread-1561] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45 2018-07-02 07:34:58,089 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5 2018-07-02 07:34:58,089 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6 2018-07-02 07:34:58,090 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2018-07-02 07:34:58,090 INFO [Thread-1561] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2018-07-02 07:34:58,090 INFO [Thread-1561] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B 2018-07-02 07:34:58,094 INFO [Thread-1561] ipc.NettyRpcServer(110): Bind to /67.195.81.155:57468 2018-07-02 07:34:58,094 INFO [Thread-1561] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:58,095 INFO [Thread-1561] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:34:58,098 INFO [Thread-1561] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 
07:34:58,099 INFO [Thread-1561] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2018-07-02 07:34:58,100 INFO [Thread-1561] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:57468 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:34:58,115 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:574680x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:34:58,117 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(543): regionserver:57468-0x16459e9b4500035 connected 2018-07-02 07:34:58,117 DEBUG [Thread-1561] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master 2018-07-02 07:34:58,119 DEBUG [Thread-1561] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/running 2018-07-02 07:34:58,120 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=57468 2018-07-02 07:34:58,121 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=57468 2018-07-02 07:34:58,122 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=57468 2018-07-02 07:34:58,128 INFO [RS:3;asf911:57468] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4 2018-07-02 07:34:58,128 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing 2018-07-02 07:34:58,158 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized 2018-07-02 07:34:58,158 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing 2018-07-02 07:34:58,190 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized 2018-07-02 07:34:58,193 INFO [RS:3;asf911:57468] zookeeper.ReadOnlyZKClient(139): Connect 0x21fbb142 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:34:58,201 INFO [PEWorker-8] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=23, state=RUNNABLE:REPLAY_REMOTE_WAL_IN_PEER; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure; resume parent processing. 
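The replication.TestSyncReplicationStandbyKillRS(111)/(117) entries above come from the test's wait loop for dead-server processing: the killed region server must first show up in the master's dead-server list, then its ServerCrashProcedure (pid=24 above) must drain before the test moves on. A minimal sketch of that pattern, assuming the ServerManager#getDeadServers()/#areDeadServersInProgress() API of this 3.0.0-SNAPSHOT build; the class and constant names here are illustrative, not a verbatim copy of the test:

```java
// Sketch: poll the active master until a killed region server is listed as
// dead and the master's server-shutdown handling has finished. Assumes the
// HBase APIs HMaster#getServerManager(), ServerManager#getDeadServers(),
// DeadServer#isDeadServer(ServerName) and ServerManager#areDeadServersInProgress().
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.master.HMaster;
import org.apache.hadoop.hbase.master.ServerManager;

final class DeadServerWait {
  private static final long SLEEP_TIME_MS = 1000;

  static void waitForRSShutdownToStartAndFinish(HMaster master, ServerName serverName)
      throws InterruptedException {
    ServerManager sm = master.getServerManager();
    // Phase 1: wait until the server appears in the dead-server list.
    while (!sm.getDeadServers().isDeadServer(serverName)) {
      Thread.sleep(SLEEP_TIME_MS);
    }
    // Phase 2: wait until shutdown (ServerCrashProcedure) processing drains.
    while (sm.areDeadServersInProgress()) {
      Thread.sleep(SLEEP_TIME_MS);
    }
  }
}
```

The two-phase wait is what keeps the "marked as dead" and "done with server shutdown processing" messages paired in the log.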
2018-07-02 07:34:58,202 INFO [PEWorker-8] procedure2.ProcedureExecutor(1266): Finished pid=25, ppid=23, state=SUCCESS; org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure in 277msec
2018-07-02 07:34:58,204 INFO [PEWorker-12] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=27, ppid=23, state=RUNNABLE:RENAME_SYNC_REPLICATION_WALS_DIR; org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure}]
2018-07-02 07:34:58,216 DEBUG [RS:3;asf911:57468] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f3c5b0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:58,217 DEBUG [RS:3;asf911:57468] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b03da32, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0
2018-07-02 07:34:58,219 DEBUG [RS:3;asf911:57468] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:3;asf911:57468
2018-07-02 07:34:58,219 INFO [RS:3;asf911:57468] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-07-02 07:34:58,219 INFO [RS:3;asf911:57468] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-07-02 07:34:58,220 INFO [RS:3;asf911:57468] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=57468, startcode=1530516898088
2018-07-02 07:34:58,223 INFO [RS-EventLoopGroup-9-6] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:38053, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService
2018-07-02 07:34:58,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,224 DEBUG [RS:3;asf911:57468] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950
2018-07-02 07:34:58,224 DEBUG [RS:3;asf911:57468] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:42386
2018-07-02 07:34:58,224 DEBUG [RS:3;asf911:57468] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1
2018-07-02 07:34:58,274 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:58,274 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:58,274 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:58,274 DEBUG [RS:3;asf911:57468] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,274 WARN [RS:3;asf911:57468] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2018-07-02 07:34:58,274 INFO [RS:3;asf911:57468] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2018-07-02 07:34:58,275 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:58,274 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,57468,1530516898088]
2018-07-02 07:34:58,275 DEBUG [RS:3;asf911:57468] regionserver.HRegionServer(1815): logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,275 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:58,275 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,275 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:58,279 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:58,281 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,291 INFO [PEWorker-11] replication.SyncReplicationReplayWALManager(137): Renamed dir from hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/remoteWALs/1 to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/remoteWALs/1-replay for peer id=1
2018-07-02 07:34:58,311 DEBUG [RS:3;asf911:57468] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:58,312 DEBUG [RS:3;asf911:57468] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:58,313 DEBUG [RS:3;asf911:57468] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,314 DEBUG [RS:3;asf911:57468] regionserver.Replication(144): Replication stats-in-log period=5 seconds
2018-07-02 07:34:58,315 INFO [RS:3;asf911:57468] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds
2018-07-02 07:34:58,318 INFO [RS:3;asf911:57468] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false
2018-07-02 07:34:58,319 INFO [RS:3;asf911:57468] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2018-07-02 07:34:58,319 INFO [RS:3;asf911:57468] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S
2018-07-02 07:34:58,323 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:34:58,323 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:34:58,323 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:34:58,323 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:34:58,323 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:34:58,323 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2
2018-07-02 07:34:58,324 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10
2018-07-02 07:34:58,324 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:34:58,324 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2
2018-07-02 07:34:58,324 DEBUG [RS:3;asf911:57468] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:34:58,348 INFO [RS:3;asf911:57468] regionserver.HeapMemoryManager(210): Starting, tuneOn=false
2018-07-02 07:34:58,348 INFO [SplitLogWorker-asf911:57468] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,57468,1530516898088 starting
2018-07-02 07:34:58,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:34:58,366 INFO [RS:3;asf911:57468] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:34:58,378 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager(257): Current list of replicators: [asf911.gq1.ygridcore.net,33727,1530516865112, asf911.gq1.ygridcore.net,38428,1530516865163, asf911.gq1.ygridcore.net,43014,1530516865056] other RSs: [asf911.gq1.ygridcore.net,33727,1530516865112, asf911.gq1.ygridcore.net,38428,1530516865163, asf911.gq1.ygridcore.net,57468,1530516898088]
2018-07-02 07:34:58,389 INFO [RS:3;asf911:57468] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,57468,1530516898088, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:57468, sessionid=0x16459e9b4500035
2018-07-02 07:34:58,389 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting
2018-07-02 07:34:58,389 INFO [Thread-1561] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,33727,1530516865112' *****
2018-07-02 07:34:58,389 INFO [Thread-1561] regionserver.HRegionServer(2168): STOPPED: Stop RS for test
2018-07-02 07:34:58,389 DEBUG [RS:3;asf911:57468] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,389 DEBUG [RS:3;asf911:57468] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,57468,1530516898088'
2018-07-02 07:34:58,389 DEBUG [RS:3;asf911:57468] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/flush-table-proc/abort'
2018-07-02 07:34:58,389 INFO [RS:1;asf911:33727] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread
2018-07-02 07:34:58,389 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(108): Waiting for [asf911.gq1.ygridcore.net,33727,1530516865112] to be listed as dead in master
2018-07-02 07:34:58,390 INFO [RS:1;asf911:33727] regionserver.HeapMemoryManager(221): Stopping
2018-07-02 07:34:58,390 INFO [SplitLogWorker-asf911:33727] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting.
2018-07-02 07:34:58,390 INFO [SplitLogWorker-asf911:33727] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,33727,1530516865112 exiting
2018-07-02 07:34:58,390 INFO [RS:1;asf911:33727] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-07-02 07:34:58,392 INFO [RS:1;asf911:33727] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-07-02 07:34:58,390 DEBUG [RS:3;asf911:57468] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/flush-table-proc/acquired'
2018-07-02 07:34:58,390 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting
2018-07-02 07:34:58,392 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting
2018-07-02 07:34:58,393 INFO [RS:1;asf911:33727] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:58,393 DEBUG [RS:1;asf911:33727] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-07-02 07:34:58,393 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started
2018-07-02 07:34:58,395 INFO [RS:1;asf911:33727] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x40eff960 to localhost:59178
2018-07-02 07:34:58,395 DEBUG [RS:1;asf911:33727] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:34:58,395 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing d1a74048f8e137b8647beefb747aafba, disabling compactions & flushes
2018-07-02 07:34:58,395 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
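The "***** STOPPING region server ... *****" banner and the "STOPPED: Stop RS for test" reason above come from the test thread killing a second region server; the reason string suggests a direct HRegionServer#stop(String) call. A sketch of triggering the same shutdown against a mini-cluster (the helper class is hypothetical; MiniHBaseCluster#getRegionServer(int) and HRegionServer#stop(String) are the assumed real APIs):

```java
// Sketch: stop one region server of a running mini-cluster, which logs the
// STOPPING banner followed by "STOPPED: <reason>" as seen in this log.
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.regionserver.HRegionServer;

final class StopRsForTest {
  static void stopRegionServer(MiniHBaseCluster cluster, int index) {
    HRegionServer rs = cluster.getRegionServer(index);
    // Asynchronous: the RS thread runs its shutdown sequence (SplitLogWorker
    // interrupt, MemStoreFlusher exit, region closes) after this returns.
    rs.stop("Stop RS for test");
  }
}
```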
2018-07-02 07:34:58,395 INFO [RS:1;asf911:33727] regionserver.CompactSplit(394): Waiting for Split Thread to finish...
2018-07-02 07:34:58,395 INFO [RS:1;asf911:33727] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish...
2018-07-02 07:34:58,396 INFO [RS:1;asf911:33727] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish...
2018-07-02 07:34:58,395 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped
2018-07-02 07:34:58,395 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting
2018-07-02 07:34:58,396 DEBUG [RS:3;asf911:57468] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:58,396 INFO [RS:1;asf911:33727] regionserver.HRegionServer(1399): Waiting on 2 regions to close
2018-07-02 07:34:58,395 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(2584): Flushing 1/1 column families, dataSize=78 B heapSize=232 B
2018-07-02 07:34:58,397 DEBUG [RS:1;asf911:33727] regionserver.HRegionServer(1403): Online Regions={1588230740=hbase:meta,,1.1588230740, d1a74048f8e137b8647beefb747aafba=hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.}
2018-07-02 07:34:58,397 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 1588230740, disabling compactions & flushes
2018-07-02 07:34:58,398 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region hbase:meta,,1.1588230740
2018-07-02 07:34:58,396 DEBUG [RS:3;asf911:57468] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,57468,1530516898088'
2018-07-02 07:34:58,398 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(2584): Flushing 3/3 column families, dataSize=2.53 KB heapSize=4.37 KB
2018-07-02 07:34:58,398 DEBUG [RS:3;asf911:57468] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/online-snapshot/abort'
2018-07-02 07:34:58,416 DEBUG [RS:3;asf911:57468] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/online-snapshot/acquired'
2018-07-02 07:34:58,416 DEBUG [RS:3;asf911:57468] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-07-02 07:34:58,416 INFO [RS:3;asf911:57468] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-07-02 07:34:58,416 INFO [RS:3;asf911:57468] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager.
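The TransitPeerSyncReplicationStateProcedure (pid=23) driving the RefreshPeer and RecoverStandby subprocedures in these records, including the remoteWALs/1 to remoteWALs/1-replay rename and the SyncReplicationReplayWAL work below, is scheduled from the client side. In client terms it is roughly the following Admin call (a sketch assuming the sync-replication Admin API of this 3.0.0-SNAPSHOT line, HBASE-19064 work; peer id "1" matches the log, connection setup is illustrative boilerplate):

```java
// Sketch: transit a sync-replication peer's state, which kicks off the
// TransitPeerSyncReplicationStateProcedure seen in these records. Assumes
// Admin#transitReplicationPeerSyncReplicationState(String, SyncReplicationState)
// from the HBase synchronous-replication branch.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.SyncReplicationState;

final class TransitPeerExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Promoting a former STANDBY peer schedules RefreshPeer on every RS and,
      // as logged here, a RecoverStandbyProcedure that renames and replays the
      // remote WAL directory before the transition completes.
      admin.transitReplicationPeerSyncReplicationState("1",
          SyncReplicationState.DOWNGRADE_ACTIVE);
    }
  }
}
```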
2018-07-02 07:34:58,427 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741844_1020{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW]]} size 7077
2018-07-02 07:34:58,428 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741844_1020 size 7077
2018-07-02 07:34:58,429 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741843_1019{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW]]} size 4898
2018-07-02 07:34:58,429 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741843_1019 size 4898
2018-07-02 07:34:58,431 INFO [RS:3;asf911:57468.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x7d5779ca to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:34:58,433 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741843_1019 size 4898
2018-07-02 07:34:58,433 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741844_1020 size 7077
2018-07-02 07:34:58,436 INFO [PEWorker-11] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=28, ppid=27, state=RUNNABLE:ASSIGN_WORKER; org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure}]
2018-07-02 07:34:58,450 DEBUG [RS:3;asf911:57468.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@8912e4b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:34:58,451 INFO [RS:3;asf911:57468.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:34:58,462 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases
2018-07-02 07:34:58,490 DEBUG [RS:3;asf911:57468.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:34:58,493 DEBUG [RS:3;asf911:57468.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500038 connected
2018-07-02 07:34:58,540 INFO [RS:3;asf911:57468.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:34:58,591 INFO [PEWorker-2] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=29, ppid=28, state=RUNNABLE; org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALRemoteProcedure}]
2018-07-02 07:34:58,829 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.25 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/d9abe0ce89514b5299447b7098ab8048
2018-07-02 07:34:58,830 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/.tmp/info/bc91ddc16ad54a6d9efa5b724ba1622f
2018-07-02 07:34:58,844 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/.tmp/info/bc91ddc16ad54a6d9efa5b724ba1622f as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info/bc91ddc16ad54a6d9efa5b724ba1622f
2018-07-02 07:34:58,853 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=33727] ipc.CallRunner(142): callId: 49 service: AdminService methodName: ExecuteProcedures size: 233 connection: 67.195.81.155:48377 deadline: 1530516958851, exception=org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
2018-07-02 07:34:58,855 WARN [RSProcedureDispatcher-pool13-t18] procedure.RSProcedureDispatcher$AbstractRSRemoteCall(212): Failed dispatch to server=asf911.gq1.ygridcore.net,33727,1530516865112 try=0
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:58,858 WARN [RSProcedureDispatcher-pool13-t18] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:58,863 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info/bc91ddc16ad54a6d9efa5b724ba1622f, entries=2, sequenceid=6, filesize=4.8 K
2018-07-02 07:34:58,871 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(2793): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for d1a74048f8e137b8647beefb747aafba in 476ms, sequenceid=6, compaction requested=false
2018-07-02 07:34:58,889 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0
2018-07-02 07:34:58,889 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0
2018-07-02 07:34:58,890 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741845_1021 size 4884
2018-07-02 07:34:58,891 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=111 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/rep_barrier/a4a715b5bf8d4f2ba86975d15491dfaa
2018-07-02 07:34:58,892 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2018-07-02 07:34:58,895 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:34:58,895 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:34:58,943 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0
2018-07-02 07:34:58,943 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0
2018-07-02 07:34:58,943 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|FINALIZED], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0
2018-07-02 07:34:58,944 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=172 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/table/ea61da4dcbf64bd786a9827f6780325e
2018-07-02 07:34:58,953 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/d9abe0ce89514b5299447b7098ab8048 as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048
2018-07-02 07:34:58,960 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048, entries=20, sequenceid=14, filesize=6.9 K
2018-07-02 07:34:58,963 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/rep_barrier/a4a715b5bf8d4f2ba86975d15491dfaa as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier/a4a715b5bf8d4f2ba86975d15491dfaa
2018-07-02 07:34:58,977 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier/a4a715b5bf8d4f2ba86975d15491dfaa, entries=1, sequenceid=14, filesize=4.8 K
2018-07-02 07:34:58,985 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/table/ea61da4dcbf64bd786a9827f6780325e as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table/ea61da4dcbf64bd786a9827f6780325e
2018-07-02 07:34:58,993 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table/ea61da4dcbf64bd786a9827f6780325e, entries=4, sequenceid=14, filesize=4.7 K
2018-07-02 07:34:58,995 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(2793): Finished flush of dataSize ~2.53 KB/2591, heapSize ~5.07 KB/5192, currentSize=0 B/0 for 1588230740 in 597ms, sequenceid=14, compaction requested=false
2018-07-02 07:34:59,012 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=33727] ipc.CallRunner(142): callId: 50 service: AdminService methodName: ExecuteProcedures size: 233 connection: 67.195.81.155:48377 deadline: 1530516959011, exception=org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
2018-07-02 07:34:59,012 WARN [RSProcedureDispatcher-pool13-t19] procedure.RSProcedureDispatcher$AbstractRSRemoteCall(212): Failed dispatch to server=asf911.gq1.ygridcore.net,33727,1530516865112 try=0
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:59,016 WARN [RSProcedureDispatcher-pool13-t19] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:59,013 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1
2018-07-02 07:34:59,019 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2018-07-02 07:34:59,019 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed hbase:meta,,1.1588230740
2018-07-02 07:34:59,019 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed hbase:meta,,1.1588230740
2018-07-02 07:34:59,169 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=33727] ipc.CallRunner(142): callId: 51 service: AdminService methodName: ExecuteProcedures size: 233 connection: 67.195.81.155:48377 deadline: 1530516959169, exception=org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
2018-07-02 07:34:59,170 WARN [RSProcedureDispatcher-pool13-t20] procedure.RSProcedureDispatcher$AbstractRSRemoteCall(212): Failed dispatch to server=asf911.gq1.ygridcore.net,33727,1530516865112 try=0
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 
1 more 2018-07-02 07:34:59,172 WARN [RSProcedureDispatcher-pool13-t20] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1 org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491) at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338) at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350) at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314) at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491) at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-07-02 07:34:59,199 INFO [RS:1;asf911:33727] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,33727,1530516865112; all regions closed. 
2018-07-02 07:34:59,217 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741841_1017{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW]]} size 791
2018-07-02 07:34:59,217 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.meta.1530516883379.meta not finished, retry = 0
2018-07-02 07:34:59,217 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741841_1017 size 791
2018-07-02 07:34:59,217 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741841_1017 size 791
2018-07-02 07:34:59,324 DEBUG [RS:1;asf911:33727] wal.AbstractFSWAL(860): Moved 2 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:34:59,324 INFO [RS:1;asf911:33727] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C33727%2C1530516865112.meta:.meta(num 1530516883379)
2018-07-02 07:34:59,327 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741839_1015{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 906
2018-07-02 07:34:59,327 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250 not finished, retry = 0
2018-07-02 07:34:59,327 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=33727] ipc.CallRunner(142): callId: 52 service: AdminService methodName: ExecuteProcedures size: 233 connection: 67.195.81.155:48377 deadline: 1530516959326, exception=org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
2018-07-02 07:34:59,328 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741839_1015 size 906
2018-07-02 07:34:59,328 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741839_1015 size 906
2018-07-02 07:34:59,328 WARN [RSProcedureDispatcher-pool13-t21] procedure.RSProcedureDispatcher$AbstractRSRemoteCall(212): Failed dispatch to server=asf911.gq1.ygridcore.net,33727,1530516865112 try=0
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:59,332 WARN [RSProcedureDispatcher-pool13-t21] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:59,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:34:59,390 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(108): Waiting for [asf911.gq1.ygridcore.net,33727,1530516865112] to be listed as dead in master
2018-07-02 07:34:59,420 WARN [RS:3;asf911:57468] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated.
2018-07-02 07:34:59,420 INFO [RS:3;asf911:57468] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C57468%2C1530516898088, suffix=, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:34:59,460 DEBUG [RS-EventLoopGroup-14-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK]
2018-07-02 07:34:59,460 DEBUG [RS-EventLoopGroup-14-4] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK]
2018-07-02 07:34:59,460 DEBUG [RS-EventLoopGroup-14-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK]
2018-07-02 07:34:59,461 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C33727%2C1530516865112,1] regionserver.WALEntryStream(222): Reached the end of WAL file 'hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250'. It was not closed cleanly, so we did not parse 8 bytes of data. This is normally ok.
2018-07-02 07:34:59,461 DEBUG [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C33727%2C1530516865112,1] regionserver.WALEntryStream(250): Reached the end of log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250
2018-07-02 07:34:59,462 DEBUG [RS:1;asf911:33727] wal.AbstractFSWAL(860): Moved 2 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:34:59,462 INFO [RS:1;asf911:33727] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C33727%2C1530516865112:(num 1530516883250)
2018-07-02 07:34:59,462 DEBUG [RS:1;asf911:33727] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:34:59,463 INFO [RS:1;asf911:33727] regionserver.Leases(149): Closed leases
2018-07-02 07:34:59,463 INFO [RS:1;asf911:33727] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,33727,1530516865112 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2018-07-02 07:34:59,463 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting.
2018-07-02 07:34:59,463 INFO [RS:1;asf911:33727] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing
2018-07-02 07:34:59,485 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=33727] ipc.CallRunner(142): callId: 53 service: AdminService methodName: ExecuteProcedures size: 233 connection: 67.195.81.155:48377 deadline: 1530516959485, exception=org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
2018-07-02 07:34:59,485 WARN [RSProcedureDispatcher-pool13-t22] procedure.RSProcedureDispatcher$AbstractRSRemoteCall(212): Failed dispatch to server=asf911.gq1.ygridcore.net,33727,1530516865112 try=0
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:59,488 WARN [RSProcedureDispatcher-pool13-t22] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:338)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.sendRequest(RSProcedureDispatcher.java:350)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:314)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$ExecuteProceduresRemoteCall.call(RSProcedureDispatcher.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1491)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3649)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28704)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-07-02 07:34:59,557 DEBUG [RS:3;asf911:57468] regionserver.ReplicationSourceManager(773): Start tracking logs for wal group asf911.gq1.ygridcore.net%2C57468%2C1530516898088 for peer 1
2018-07-02 07:34:59,557 INFO [RS:3;asf911:57468] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420
2018-07-02 07:34:59,557 DEBUG [RS:3;asf911:57468] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C57468%2C1530516898088
2018-07-02 07:34:59,558 INFO [RS:3;asf911:57468] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:34:59,558 DEBUG [RS:3;asf911:57468] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-5924c3e7-0126-4318-ab71-97788504e4c7,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK]]
2018-07-02 07:34:59,574 INFO [RS:1;asf911:33727] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x44ffd2cc to localhost:59178
2018-07-02 07:34:59,576 DEBUG [RS:1;asf911:33727] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:34:59,576 INFO [RS:1;asf911:33727] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1 terminated
2018-07-02 07:34:59,578 INFO [RS:1;asf911:33727] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:33727
2018-07-02 07:34:59,591 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:59,591 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:59,591 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:59,591 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:34:59,591 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:33727-0x16459e9b450000e, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:59,606 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(868): Not transferring queue since we are shutting down
2018-07-02 07:34:59,624 INFO [RS:1;asf911:33727] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,33727,1530516865112; zookeeper connection closed.
2018-07-02 07:34:59,624 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,33727,1530516865112]
2018-07-02 07:34:59,624 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:59,624 INFO [RegionServerTracker-0] master.ServerManager(604): Processing expiration of asf911.gq1.ygridcore.net,33727,1530516865112 on asf911.gq1.ygridcore.net,44014,1530516864901
2018-07-02 07:34:59,624 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:59,624 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:59,625 INFO [Time-limited test-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 znode expired, triggering replicatorRemoved event
2018-07-02 07:34:59,624 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5b9df354] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5b9df354
2018-07-02 07:34:59,625 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:59,625 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:59,625 INFO [Thread-1561-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,33727,1530516865112 znode expired, triggering replicatorRemoved event
2018-07-02 07:34:59,625 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:34:59,626 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:59,626 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:34:59,626 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:59,627 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:34:59,643 WARN [RSProcedureDispatcher-pool13-t23] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 is not online
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$DeadRSRemoteCall.call(RSProcedureDispatcher.java:285)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$DeadRSRemoteCall.call(RSProcedureDispatcher.java:276)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2018-07-02 07:34:59,800 WARN [RSProcedureDispatcher-pool13-t24] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server asf911.gq1.ygridcore.net,33727,1530516865112 is not online
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$DeadRSRemoteCall.call(RSProcedureDispatcher.java:285)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher$DeadRSRemoteCall.call(RSProcedureDispatcher.java:276)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2018-07-02 07:34:59,807 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(887): Stored pid=30, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true
2018-07-02 07:34:59,807 DEBUG [RegionServerTracker-0] assignment.AssignmentManager(1321): Added=asf911.gq1.ygridcore.net,33727,1530516865112 to dead servers, submitted shutdown handler to be executed meta=true
2018-07-02 07:34:59,808 WARN [RegionServerTracker-0] replication.SyncReplicationReplayWALRemoteProcedure(107): Replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 failed for peer id=1
org.apache.hadoop.hbase.DoNotRetryIOException: server not online asf911.gq1.ygridcore.net,33727,1530516865112
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.abortPendingOperations(RSProcedureDispatcher.java:130)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.abortPendingOperations(RSProcedureDispatcher.java:60)
    at org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher$BufferNode.abortOperationsInQueue(RemoteProcedureDispatcher.java:380)
    at org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher.removeNode(RemoteProcedureDispatcher.java:193)
    at org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.serverRemoved(RSProcedureDispatcher.java:143)
    at org.apache.hadoop.hbase.master.ServerManager.expireServer(ServerManager.java:610)
    at org.apache.hadoop.hbase.master.RegionServerTracker.refresh(RegionServerTracker.java:160)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2018-07-02 07:34:59,809 INFO [PEWorker-4] procedure.ServerCrashProcedure(118): Start pid=30, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true
2018-07-02 07:34:59,809 WARN [PEWorker-9] replication.SyncReplicationReplayWALRemoteProcedure(162): Can not add remote operation for replay wals [remoteWALs/1-replay/asf911.gq1.ygridcore.net%2C38972%2C1530516853959-1530516886457-1.1530516886462.syncrep] on asf911.gq1.ygridcore.net,33727,1530516865112 for peer id=1, this usually because the server is already dead, retry
2018-07-02 07:34:59,809 INFO [PEWorker-9] procedure2.ProcedureExecutor$WorkerThread(1763): ASSERT pid=29
java.lang.AssertionError: expected to add a child in the front
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler.doAdd(MasterProcedureScheduler.java:152)
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler.enqueue(MasterProcedureScheduler.java:133)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.push(AbstractProcedureScheduler.java:115)
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler.yield(MasterProcedureScheduler.java:120)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1486)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1241)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1761)
2018-07-02 07:34:59,810 WARN [PEWorker-9] procedure2.ProcedureExecutor$WorkerThread(1776): Worker terminating UNNATURALLY null
java.lang.AssertionError: expected to add a child in the front
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler.doAdd(MasterProcedureScheduler.java:152)
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler.enqueue(MasterProcedureScheduler.java:133)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.push(AbstractProcedureScheduler.java:115)
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler.yield(MasterProcedureScheduler.java:120)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1486)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1241)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1761)
2018-07-02 07:34:59,974 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(229): Splitting meta WALs pid=30, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true
2018-07-02 07:34:59,976 DEBUG [PEWorker-4] master.MasterWalManager(283): Renamed region directory: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112-splitting
2018-07-02 07:34:59,976 INFO [PEWorker-4]
master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,33727,1530516865112] 2018-07-02 07:34:59,978 INFO [PEWorker-4] master.SplitLogManager(177): hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112-splitting is empty dir, no logs to split 2018-07-02 07:34:59,978 INFO [PEWorker-4] master.SplitLogManager(241): Started splitting 0 logs in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112-splitting] for [asf911.gq1.ygridcore.net,33727,1530516865112] 2018-07-02 07:34:59,980 INFO [PEWorker-4] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,33727,1530516865112-splitting] in 2ms 2018-07-02 07:34:59,980 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(235): Done splitting meta WALs pid=30, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true 2018-07-02 07:35:00,066 INFO [PEWorker-4] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=31, ppid=30, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}] 2018-07-02 07:35:00,149 INFO [PEWorker-4] procedure.MasterProcedureScheduler(697): pid=31, ppid=30, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 checking lock on 1588230740 2018-07-02 07:35:00,149 INFO [PEWorker-4] assignment.AssignProcedure(218): Starting pid=31, ppid=30, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf911.gq1.ygridcore.net,33727,1530516865112; forceNewPlan=false, retain=true 2018-07-02 07:35:00,302 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 0 retained the pre-restart assignment. 1 regions were assigned to random hosts, since the old hosts for these regions are no longer present in the cluster. These hosts were: 2018-07-02 07:35:00,304 INFO [PEWorker-12] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:35:00,341 INFO [PEWorker-12] assignment.RegionTransitionProcedure(241): Dispatch pid=31, ppid=30, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:35:00,390 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(111): Server [asf911.gq1.ygridcore.net,33727,1530516865112] marked as dead, waiting for it to finish dead processing 2018-07-02 07:35:00,390 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(114): Server [asf911.gq1.ygridcore.net,33727,1530516865112] still being processed, waiting 2018-07-02 07:35:00,492 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=38428] regionserver.RSRpcServices(1983): Open hbase:meta,,1.1588230740 2018-07-02 07:35:00,493 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:35:00,496 WARN [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 
2018-07-02 07:35:00,496 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C38428%2C1530516865163.meta, suffix=.meta, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:35:00,503 DEBUG [RS-EventLoopGroup-14-9] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK]
2018-07-02 07:35:00,503 DEBUG [RS-EventLoopGroup-14-10] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK]
2018-07-02 07:35:00,503 DEBUG [RS-EventLoopGroup-14-11] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]
2018-07-02 07:35:00,506 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.meta.1530516900496.meta
2018-07-02 07:35:00,506 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]]
2018-07-02 07:35:00,506 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2018-07-02 07:35:00,507 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(200): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2018-07-02 07:35:00,507 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(8086): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2018-07-02 07:35:00,509 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.RegionCoprocessorHost(394): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2018-07-02 07:35:00,509 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table meta 1588230740
2018-07-02 07:35:00,510 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-07-02 07:35:00,510 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 1588230740
2018-07-02 07:35:00,510 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 1588230740
2018-07-02 07:35:00,514 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info
2018-07-02 07:35:00,514 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info
2018-07-02 07:35:00,515 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:00,515 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:35:00,525 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048
2018-07-02 07:35:00,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:35:00,527 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier
2018-07-02 07:35:00,527 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier
2018-07-02 07:35:00,528 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:00,528 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:35:00,535 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier/a4a715b5bf8d4f2ba86975d15491dfaa
2018-07-02 07:35:00,536 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:35:00,561 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table
2018-07-02 07:35:00,561 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table
2018-07-02 07:35:00,562 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:00,562 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:35:00,570 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table/ea61da4dcbf64bd786a9827f6780325e
2018-07-02 07:35:00,570 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:35:00,571 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 1588230740
2018-07-02 07:35:00,573 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740
2018-07-02 07:35:00,573 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for 1588230740
2018-07-02 07:35:00,573 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 1588230740
2018-07-02 07:35:00,575 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 1588230740
2018-07-02 07:35:00,576 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7M)) instead.
2018-07-02 07:35:00,578 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 1588230740
2018-07-02 07:35:00,578 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 1588230740; next sequenceid=18
2018-07-02 07:35:00,579 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 1588230740
2018-07-02 07:35:00,582 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:meta,,1.1588230740
2018-07-02 07:35:00,583 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=18, pid=31, ppid=30, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:00,584 DEBUG [PEWorker-11] assignment.RegionTransitionProcedure(354): Finishing pid=31, ppid=30, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:00,584 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:meta,,1.1588230740
2018-07-02 07:35:00,584 INFO [PEWorker-11] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:00,586 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:meta,,1.1588230740 on asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:00,671 INFO [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(387): Atomically moving asf911.gq1.ygridcore.net,43014,1530516865056/1's WALs to asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:00,699 INFO [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(402): Removed empty asf911.gq1.ygridcore.net,43014,1530516865056/1
2018-07-02 07:35:00,800 INFO [PEWorker-11] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=30, state=RUNNABLE:SERVER_CRASH_GET_REGIONS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true; resume parent processing.
2018-07-02 07:35:00,800 INFO [PEWorker-11] procedure2.ProcedureExecutor(1266): Finished pid=31, ppid=30, state=SUCCESS; AssignProcedure table=hbase:meta, region=1588230740 in 541msec
2018-07-02 07:35:00,887 DEBUG [PEWorker-10] procedure.ServerCrashProcedure(239): Splitting WALs pid=30, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true
2018-07-02 07:35:00,889 INFO [PEWorker-10] master.MasterWalManager(285): Log dir for server asf911.gq1.ygridcore.net,33727,1530516865112 does not exist
2018-07-02 07:35:00,889 INFO [PEWorker-10] master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,33727,1530516865112]
2018-07-02 07:35:00,889 INFO [PEWorker-10] master.SplitLogManager(241): Started splitting 0 logs in [] for [asf911.gq1.ygridcore.net,33727,1530516865112]
2018-07-02 07:35:00,889 INFO [PEWorker-10] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [] in 0ms
2018-07-02 07:35:00,891 DEBUG [PEWorker-10] procedure.ServerCrashProcedure(247): Done splitting WALs pid=30, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true
2018-07-02 07:35:00,966 INFO [PEWorker-10] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=32, ppid=30, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba}]
2018-07-02 07:35:00,992 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2018-07-02 07:35:01,043 INFO [PEWorker-10] procedure.MasterProcedureScheduler(697): pid=32, ppid=30, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba checking lock on d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,050 DEBUG [RS-EventLoopGroup-13-27] ipc.FailedServers(56): Added failed server with address asf911.gq1.ygridcore.net/67.195.81.155:33727 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: asf911.gq1.ygridcore.net/67.195.81.155:33727
2018-07-02 07:35:01,160 INFO [RS-EventLoopGroup-13-29] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:35630, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-07-02 07:35:01,179 INFO [PEWorker-10] assignment.AssignProcedure(218): Starting pid=32, ppid=30, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OFFLINE, location=asf911.gq1.ygridcore.net,33727,1530516865112; forceNewPlan=false, retain=true
2018-07-02 07:35:01,330 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 0 retained the pre-restart assignment. 1 regions were assigned to random hosts, since the old hosts for these regions are no longer present in the cluster. These hosts were:
2018-07-02 07:35:01,330 INFO [PEWorker-16] assignment.RegionStateStore(199): pid=32 updating hbase:meta row=d1a74048f8e137b8647beefb747aafba, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:01,335 INFO [PEWorker-16] assignment.RegionTransitionProcedure(241): Dispatch pid=32, ppid=30, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:01,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:35:01,390 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(114): Server [asf911.gq1.ygridcore.net,33727,1530516865112] still being processed, waiting
2018-07-02 07:35:01,487 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=38428] regionserver.RSRpcServices(1983): Open hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:35:01,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => d1a74048f8e137b8647beefb747aafba, NAME => 'hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.', STARTKEY => '', ENDKEY => ''}
2018-07-02 07:35:01,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table namespace d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-07-02 07:35:01,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,496 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info
2018-07-02 07:35:01,497 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info
2018-07-02 07:35:01,497 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:01,497 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:35:01,509 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info/bc91ddc16ad54a6d9efa5b724ba1622f
2018-07-02 07:35:01,509 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:35:01,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,516 INFO [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened d1a74048f8e137b8647beefb747aafba; next sequenceid=10
2018-07-02 07:35:01,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for d1a74048f8e137b8647beefb747aafba
2018-07-02 07:35:01,519 INFO [PostOpenDeployTasks:d1a74048f8e137b8647beefb747aafba] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:35:01,520 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=10, pid=32, ppid=30, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:01,520 DEBUG [PEWorker-1] assignment.RegionTransitionProcedure(354): Finishing pid=32, ppid=30, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OPENING, location=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:01,520 DEBUG [PostOpenDeployTasks:d1a74048f8e137b8647beefb747aafba] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:35:01,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. on asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:01,522 INFO [PEWorker-1] assignment.RegionStateStore(199): pid=32 updating hbase:meta row=d1a74048f8e137b8647beefb747aafba, regionState=OPEN, openSeqNum=10, regionLocation=asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:01,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=30, state=RUNNABLE:SERVER_CRASH_HANDLE_RIT2; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true; resume parent processing.
2018-07-02 07:35:01,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1266): Finished pid=32, ppid=30, state=SUCCESS; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba in 558msec
2018-07-02 07:35:01,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1266): Finished pid=30, state=SUCCESS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,33727,1530516865112, splitWal=true, meta=true in 2.1360sec
2018-07-02 07:35:02,040 INFO [asf911:38428Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C38428%2C1530516865163]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 at position: 408
2018-07-02 07:35:02,057 DEBUG [ReplicationExecutor-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,43014,1530516865056 already deleted, retry=false
2018-07-02 07:35:02,391 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(117): Server [asf911.gq1.ygridcore.net,33727,1530516865112] done with server shutdown processing
2018-07-02 07:35:02,417 INFO [Thread-1561] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45
2018-07-02 07:35:02,417 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:35:02,417 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:35:02,417 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:35:02,417 INFO [Thread-1561] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:35:02,417 INFO [Thread-1561] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-07-02 07:35:02,418 INFO [Thread-1561] ipc.NettyRpcServer(110): Bind to /67.195.81.155:46345
2018-07-02 07:35:02,419 INFO [Thread-1561] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:02,419 INFO [Thread-1561] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:02,420 INFO [Thread-1561] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:35:02,421 INFO [Thread-1561] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:35:02,423 INFO [Thread-1561] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:46345 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:35:02,441 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:463450x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:35:02,442 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(543): regionserver:46345-0x16459e9b4500039 connected
2018-07-02 07:35:02,443 DEBUG [Thread-1561] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master
2018-07-02 07:35:02,444 DEBUG [Thread-1561] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/running
2018-07-02 07:35:02,446 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46345
2018-07-02 07:35:02,446 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=46345
2018-07-02 07:35:02,447 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46345
2018-07-02 07:35:02,452 INFO [RS:4;asf911:46345] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4
2018-07-02 07:35:02,452 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing
2018-07-02 07:35:02,458 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized
2018-07-02 07:35:02,458 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing
2018-07-02 07:35:02,466 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized
2018-07-02 07:35:02,468 INFO [RS:4;asf911:46345] zookeeper.ReadOnlyZKClient(139): Connect 0x39c21844 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:35:02,474 DEBUG [RS:4;asf911:46345] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3705f10c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:35:02,475 DEBUG [RS:4;asf911:46345] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@333a7334, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0
2018-07-02 07:35:02,475 DEBUG [RS:4;asf911:46345] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:4;asf911:46345
2018-07-02 07:35:02,475 INFO [RS:4;asf911:46345] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-07-02 07:35:02,475 INFO [RS:4;asf911:46345] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-07-02 07:35:02,476 INFO [RS:4;asf911:46345] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=46345, startcode=1530516902414
2018-07-02 07:35:02,478 INFO [RS-EventLoopGroup-9-7] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:55717, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService
2018-07-02 07:35:02,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,480 DEBUG [RS:4;asf911:46345] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950
2018-07-02 07:35:02,480 DEBUG [RS:4;asf911:46345] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:42386
2018-07-02 07:35:02,480 DEBUG [RS:4;asf911:46345] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1
2018-07-02 07:35:02,490 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:35:02,490 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:35:02,490 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:35:02,491 DEBUG [RS:4;asf911:46345] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,491 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,46345,1530516902414]
2018-07-02 07:35:02,491 WARN [RS:4;asf911:46345] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2018-07-02 07:35:02,491 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:02,491 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:02,491 INFO [RS:4;asf911:46345] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2018-07-02 07:35:02,492 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:02,492 DEBUG [RS:4;asf911:46345] regionserver.HRegionServer(1815): logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,492 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:02,492 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,492 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,527 DEBUG [RS:4;asf911:46345] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:02,528 DEBUG [RS:4;asf911:46345] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:02,528 DEBUG [RS:4;asf911:46345] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,529 DEBUG [RS:4;asf911:46345] regionserver.Replication(144): Replication stats-in-log period=5 seconds
2018-07-02 07:35:02,530 INFO [RS:4;asf911:46345] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds
2018-07-02 07:35:02,532 INFO [RS:4;asf911:46345] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false
2018-07-02 07:35:02,532 INFO [RS:4;asf911:46345] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2018-07-02 07:35:02,533 INFO [RS:4;asf911:46345] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S
2018-07-02 07:35:02,536 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:02,537 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2
2018-07-02 07:35:02,538 DEBUG [RS:4;asf911:46345] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:35:02,557 INFO [RS:4;asf911:46345] regionserver.HeapMemoryManager(210): Starting, tuneOn=false
2018-07-02 07:35:02,557 INFO [SplitLogWorker-asf911:46345] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,46345,1530516902414 starting
2018-07-02 07:35:02,576 INFO [RS:4;asf911:46345] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:35:02,579 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager(257): Current list of replicators: [asf911.gq1.ygridcore.net,33727,1530516865112, asf911.gq1.ygridcore.net,38428,1530516865163, asf911.gq1.ygridcore.net,57468,1530516898088] other RSs: [asf911.gq1.ygridcore.net,38428,1530516865163, asf911.gq1.ygridcore.net,57468,1530516898088, asf911.gq1.ygridcore.net,46345,1530516902414]
2018-07-02 07:35:02,600 INFO [RS:4;asf911:46345] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,46345,1530516902414, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:46345, sessionid=0x16459e9b4500039
2018-07-02 07:35:02,600 INFO [Thread-1561] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,38428,1530516865163' *****
2018-07-02 07:35:02,601 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting
2018-07-02 07:35:02,601 DEBUG [RS:4;asf911:46345] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,601 DEBUG [RS:4;asf911:46345] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,46345,1530516902414'
2018-07-02 07:35:02,601 DEBUG [RS:4;asf911:46345] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/flush-table-proc/abort'
2018-07-02 07:35:02,601 INFO [Thread-1561] regionserver.HRegionServer(2168): STOPPED: Stop RS for test
2018-07-02 07:35:02,601 INFO [RS:2;asf911:38428] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread
2018-07-02 07:35:02,601 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(108): Waiting for [asf911.gq1.ygridcore.net,38428,1530516865163] to be listed as dead in master
2018-07-02 07:35:02,601 INFO [RS:2;asf911:38428] regionserver.HeapMemoryManager(221): Stopping
2018-07-02 07:35:02,601 INFO [SplitLogWorker-asf911:38428] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting.
2018-07-02 07:35:02,602 INFO [SplitLogWorker-asf911:38428] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,38428,1530516865163 exiting
2018-07-02 07:35:02,604 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped
2018-07-02 07:35:02,604 INFO [RS:2;asf911:38428] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-07-02 07:35:02,604 DEBUG [RS:4;asf911:46345] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/flush-table-proc/acquired'
2018-07-02 07:35:02,604 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting
2018-07-02 07:35:02,604 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting
2018-07-02 07:35:02,604 INFO [RS:2;asf911:38428] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-07-02 07:35:02,605 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started
2018-07-02 07:35:02,605 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting
2018-07-02 07:35:02,605 DEBUG [RS:4;asf911:46345] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:02,605 DEBUG [RS:4;asf911:46345] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,46345,1530516902414'
2018-07-02 07:35:02,605 DEBUG [RS:4;asf911:46345] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/online-snapshot/abort'
2018-07-02 07:35:02,605 INFO [RS:2;asf911:38428] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:02,607 DEBUG [RS:2;asf911:38428] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-07-02 07:35:02,606 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 0f545ce4fc7475df98047cbbbf56ffee, disabling compactions & flushes
2018-07-02 07:35:02,608 DEBUG [RS:4;asf911:46345] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/online-snapshot/acquired'
2018-07-02 07:35:02,608 INFO [RS:2;asf911:38428] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x0caccc47 to localhost:59178
2018-07-02 07:35:02,608 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1527): Closing d1a74048f8e137b8647beefb747aafba, disabling compactions & flushes
2018-07-02 07:35:02,608 DEBUG [RS:2;asf911:38428] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:35:02,608 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:35:02,609 INFO [RS:2;asf911:38428] regionserver.CompactSplit(394): Waiting for Split Thread to finish...
2018-07-02 07:35:02,608 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1567): Updates disabled for region hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:35:02,608 DEBUG [RS:4;asf911:46345] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-07-02 07:35:02,609 INFO [RS:2;asf911:38428] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish...
2018-07-02 07:35:02,609 INFO [RS:2;asf911:38428] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish...
2018-07-02 07:35:02,609 INFO [RS:4;asf911:46345] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-07-02 07:35:02,609 INFO [RS:4;asf911:46345] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager.
2018-07-02 07:35:02,610 INFO [RS:2;asf911:38428] regionserver.HRegionServer(1399): Waiting on 3 regions to close
2018-07-02 07:35:02,611 DEBUG [RS:2;asf911:38428] regionserver.HRegionServer(1403): Online Regions={0f545ce4fc7475df98047cbbbf56ffee=SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee., 1588230740=hbase:meta,,1.1588230740, d1a74048f8e137b8647beefb747aafba=hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.}
2018-07-02 07:35:02,612 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 1588230740, disabling compactions & flushes
2018-07-02 07:35:02,614 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region hbase:meta,,1.1588230740
2018-07-02 07:35:02,614 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(2584): Flushing 3/3 column families, dataSize=956 B heapSize=1.52 KB
2018-07-02 07:35:02,621 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2018-07-02 07:35:02,622 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9
2018-07-02 07:35:02,629 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:35:02,629 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1681): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:35:02,630 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:35:02,630 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] handler.CloseRegionHandler(124): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:35:02,644 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:35:02,644 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0 2018-07-02 07:35:02,644 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0 2018-07-02 07:35:02,646 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=956 B at sequenceid=22 (bloomFilter=false), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/b8da5a0d66424038a0c38772e2f357c5 2018-07-02 07:35:02,649 INFO [RS:4;asf911:46345.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x31331b88 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms 2018-07-02 07:35:02,654 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/b8da5a0d66424038a0c38772e2f357c5 as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5 2018-07-02 07:35:02,658 DEBUG [RS:4;asf911:46345.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@408040e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-07-02 07:35:02,658 INFO [RS:4;asf911:46345.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178 2018-07-02 07:35:02,662 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5, entries=8, sequenceid=22, filesize=5.6 K 2018-07-02 07:35:02,665 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(2793): Finished flush of dataSize ~956 B/956, heapSize ~1.76 
KB/1800, currentSize=0 B/0 for 1588230740 in 51ms, sequenceid=22, compaction requested=false 2018-07-02 07:35:02,665 DEBUG [RS:4;asf911:46345.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2018-07-02 07:35:02,667 DEBUG [RS:4;asf911:46345.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b450003c connected 2018-07-02 07:35:02,669 INFO [RS:4;asf911:46345.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b 2018-07-02 07:35:02,673 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-07-02 07:35:02,680 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=17 2018-07-02 07:35:02,681 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2018-07-02 07:35:02,684 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed hbase:meta,,1.1588230740 2018-07-02 07:35:02,684 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed hbase:meta,,1.1588230740 2018-07-02 07:35:02,814 INFO [RS:2;asf911:38428] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,38428,1530516865163; all regions closed. 2018-07-02 07:35:02,817 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:35:02,817 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:35:02,819 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741848_1024 size 2115 2018-07-02 07:35:02,822 DEBUG [RS:2;asf911:38428] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:35:02,822 INFO [RS:2;asf911:38428] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C38428%2C1530516865163.meta:.meta(num 1530516900496) 2018-07-02 07:35:02,825 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:35:02,826 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:35:02,826 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW]]} size 0 2018-07-02 07:35:02,830 DEBUG [RS:2;asf911:38428] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:35:02,830 INFO [RS:2;asf911:38428] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C38428%2C1530516865163:(num 1530516883251) 2018-07-02 07:35:02,830 DEBUG [RS:2;asf911:38428] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:35:02,830 INFO [RS:2;asf911:38428] regionserver.Leases(149): Closed leases 2018-07-02 07:35:02,830 INFO [RS:2;asf911:38428] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,38428,1530516865163 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown 2018-07-02 07:35:02,831 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting. 
2018-07-02 07:35:02,831 INFO [RS:2;asf911:38428] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing 2018-07-02 07:35:02,898 INFO [RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C38428%2C1530516865163,1] regionserver.WALEntryStream(321): Log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 was moved to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 2018-07-02 07:35:02,943 INFO [RS:2;asf911:38428] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x5b3869c3 to localhost:59178 2018-07-02 07:35:02,944 DEBUG [RS:2;asf911:38428] ipc.AbstractRpcClient(483): Stopping rpc client 2018-07-02 07:35:02,945 INFO [RS:2;asf911:38428] regionserver.ReplicationSource(527): ReplicationSourceWorker RS_REFRESH_PEER-regionserver/asf911:0-0.replicationSource,1.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1 terminated 2018-07-02 07:35:02,946 INFO [RS:2;asf911:38428] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:38428 2018-07-02 07:35:02,960 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:35:02,960 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:35:02,960 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:35:02,963 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 2018-07-02 07:35:02,964 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:38428-0x16459e9b450000f, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:35:02,965 INFO [RS:2;asf911:38428] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,38428,1530516865163; zookeeper connection closed. 
2018-07-02 07:35:02,965 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:02,965 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,38428,1530516865163] 2018-07-02 07:35:02,966 INFO [RegionServerTracker-0] master.ServerManager(604): Processing expiration of asf911.gq1.ygridcore.net,38428,1530516865163 on asf911.gq1.ygridcore.net,44014,1530516864901 2018-07-02 07:35:02,966 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@302aac6c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@302aac6c 2018-07-02 07:35:02,966 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:02,966 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:02,966 INFO [Thread-1561-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 znode expired, triggering replicatorRemoved event 2018-07-02 07:35:02,966 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:02,966 INFO [Thread-1561-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,38428,1530516865163 znode expired, triggering replicatorRemoved event 2018-07-02 07:35:02,966 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:35:02,967 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs 2018-07-02 07:35:02,967 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:02,968 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:02,968 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:02,968 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing 
znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,124 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(887): Stored pid=33, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true 2018-07-02 07:35:03,124 DEBUG [RegionServerTracker-0] assignment.AssignmentManager(1321): Added=asf911.gq1.ygridcore.net,38428,1530516865163 to dead servers, submitted shutdown handler to be executed meta=true 2018-07-02 07:35:03,126 INFO [PEWorker-6] procedure.ServerCrashProcedure(118): Start pid=33, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true 2018-07-02 07:35:03,201 DEBUG [PEWorker-7] procedure.ServerCrashProcedure(229): Splitting meta WALs pid=33, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true 2018-07-02 07:35:03,203 DEBUG [PEWorker-7] master.MasterWalManager(283): Renamed region directory: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163-splitting 2018-07-02 07:35:03,203 INFO [PEWorker-7] master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,38428,1530516865163] 2018-07-02 07:35:03,205 INFO [PEWorker-7] master.SplitLogManager(177): hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163-splitting is empty dir, no logs to split 2018-07-02 07:35:03,205 INFO [PEWorker-7] master.SplitLogManager(241): Started splitting 0 logs in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163-splitting] for [asf911.gq1.ygridcore.net,38428,1530516865163] 2018-07-02 07:35:03,207 INFO [PEWorker-7] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,38428,1530516865163-splitting] in 2ms 2018-07-02 07:35:03,207 DEBUG [PEWorker-7] procedure.ServerCrashProcedure(235): Done splitting meta WALs pid=33, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true 2018-07-02 07:35:03,260 INFO [PEWorker-7] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}] 2018-07-02 07:35:03,341 INFO [PEWorker-8] procedure.MasterProcedureScheduler(697): pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 checking lock on 1588230740 2018-07-02 07:35:03,341 INFO [PEWorker-8] assignment.AssignProcedure(218): Starting pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38428,1530516865163; forceNewPlan=false, retain=true 2018-07-02 07:35:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: -1 2018-07-02 07:35:03,492 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 0 retained the pre-restart assignment. 1 regions were assigned to random hosts, since the old hosts for these regions are no longer present in the cluster. These hosts were: 2018-07-02 07:35:03,493 INFO [PEWorker-4] assignment.AssignProcedure(246): Early suspend! pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,602 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(111): Server [asf911.gq1.ygridcore.net,38428,1530516865163] marked as dead, waiting for it to finish dead processing 2018-07-02 07:35:03,602 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(114): Server [asf911.gq1.ygridcore.net,38428,1530516865163] still being processed, waiting 2018-07-02 07:35:03,613 WARN [RS:4;asf911:46345] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:35:03,613 INFO [RS:4;asf911:46345] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C46345%2C1530516902414, suffix=, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:35:03,628 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK] 2018-07-02 07:35:03,628 DEBUG [RS-EventLoopGroup-15-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:35:03,629 DEBUG [RS-EventLoopGroup-15-4] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:35:03,673 DEBUG [RS:4;asf911:46345] regionserver.ReplicationSourceManager(773): Start tracking logs for wal group asf911.gq1.ygridcore.net%2C46345%2C1530516902414 for peer 1 2018-07-02 07:35:03,674 INFO [RS:4;asf911:46345] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 2018-07-02 07:35:03,674 DEBUG [RS:4;asf911:46345] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C46345%2C1530516902414 2018-07-02 07:35:03,675 INFO [RS:4;asf911:46345] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2018-07-02 07:35:03,675 DEBUG [RS:4;asf911:46345] wal.AbstractFSWAL(775): Create new 
AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38320,DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK]] 2018-07-02 07:35:03,678 INFO [PEWorker-12] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,683 INFO [PEWorker-12] assignment.RegionTransitionProcedure(241): Dispatch pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,836 DEBUG [RSProcedureDispatcher-pool13-t27] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,839 INFO [RS-EventLoopGroup-15-9] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:34103, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-07-02 07:35:03,840 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=46345] regionserver.RSRpcServices(1983): Open hbase:meta,,1.1588230740 2018-07-02 07:35:03,840 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-07-02 07:35:03,844 WARN [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated. 2018-07-02 07:35:03,844 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C46345%2C1530516902414.meta, suffix=.meta, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs 2018-07-02 07:35:03,852 DEBUG [RS-EventLoopGroup-15-10] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK] 2018-07-02 07:35:03,852 DEBUG [RS-EventLoopGroup-15-11] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK] 2018-07-02 07:35:03,852 DEBUG [RS-EventLoopGroup-15-12] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK] 2018-07-02 07:35:03,854 INFO [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.meta.1530516903844.meta 2018-07-02 07:35:03,855 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:51748,DS-38565b32-54b2-419a-97c3-f65c173a0df3,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK], 
DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK]] 2018-07-02 07:35:03,855 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:35:03,856 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(200): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2018-07-02 07:35:03,856 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(8086): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2018-07-02 07:35:03,856 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.RegionCoprocessorHost(394): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2018-07-02 07:35:03,856 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table meta 1588230740 2018-07-02 07:35:03,856 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:35:03,857 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 1588230740 2018-07-02 07:35:03,857 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 1588230740 2018-07-02 07:35:03,861 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info 2018-07-02 07:35:03,861 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info 2018-07-02 07:35:03,862 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:35:03,862 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:35:03,872 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5 2018-07-02 07:35:03,877 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048 2018-07-02 07:35:03,877 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:35:03,879 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:35:03,880 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier 2018-07-02 07:35:03,880 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:35:03,880 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:35:03,889 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier/a4a715b5bf8d4f2ba86975d15491dfaa 2018-07-02 07:35:03,890 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:35:03,891 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table 2018-07-02 07:35:03,891 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table 2018-07-02 07:35:03,892 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:35:03,892 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:35:03,899 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/table/ea61da4dcbf64bd786a9827f6780325e 2018-07-02 07:35:03,899 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:35:03,899 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 1588230740 2018-07-02 07:35:03,902 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740 2018-07-02 07:35:03,902 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for 1588230740 2018-07-02 07:35:03,902 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 1588230740 2018-07-02 07:35:03,903 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 1588230740 2018-07-02 07:35:03,905 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7M)) instead. 
2018-07-02 07:35:03,906 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 1588230740 2018-07-02 07:35:03,907 INFO [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 1588230740; next sequenceid=26 2018-07-02 07:35:03,907 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 1588230740 2018-07-02 07:35:03,910 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:meta,,1.1588230740 2018-07-02 07:35:03,918 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=26, pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,918 DEBUG [PEWorker-11] assignment.RegionTransitionProcedure(354): Finishing pid=34, ppid=33, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,918 INFO [PEWorker-11] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:03,918 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:meta,,1.1588230740 2018-07-02 07:35:03,921 DEBUG [RS_OPEN_META-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:meta,,1.1588230740 on asf911.gq1.ygridcore.net,46345,1530516902414 2018-07-02 07:35:04,107 INFO [PEWorker-11] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=33, state=RUNNABLE:SERVER_CRASH_GET_REGIONS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true; resume parent processing. 
2018-07-02 07:35:04,108 INFO [PEWorker-11] procedure2.ProcedureExecutor(1266): Finished pid=34, ppid=33, state=SUCCESS; AssignProcedure table=hbase:meta, region=1588230740 in 692msec 2018-07-02 07:35:04,231 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(239): Splitting WALs pid=33, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true 2018-07-02 07:35:04,233 INFO [PEWorker-2] master.MasterWalManager(285): Log dir for server asf911.gq1.ygridcore.net,38428,1530516865163 does not exist 2018-07-02 07:35:04,233 INFO [PEWorker-2] master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,38428,1530516865163] 2018-07-02 07:35:04,234 INFO [PEWorker-2] master.SplitLogManager(241): Started splitting 0 logs in [] for [asf911.gq1.ygridcore.net,38428,1530516865163] 2018-07-02 07:35:04,234 INFO [PEWorker-2] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [] in 0ms 2018-07-02 07:35:04,234 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(247): Done splitting WALs pid=33, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true 2018-07-02 07:35:04,259 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(868): Not transferring queue since we are shutting down 2018-07-02 07:35:04,314 INFO [PEWorker-13] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba}, {pid=36, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee}] 2018-07-02 07:35:04,382 INFO [PEWorker-10] procedure.MasterProcedureScheduler(697): pid=35, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba checking lock on d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,383 INFO [PEWorker-16] procedure.MasterProcedureScheduler(697): pid=36, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee checking lock on 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,385 DEBUG [RS-EventLoopGroup-13-30] ipc.FailedServers(56): Added failed server with address asf911.gq1.ygridcore.net/67.195.81.155:38428 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) 
failed: Connection refused: asf911.gq1.ygridcore.net/67.195.81.155:38428 2018-07-02 07:35:04,492 INFO [RS-EventLoopGroup-15-16] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:34137, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:35:04,496 INFO [PEWorker-10] assignment.AssignProcedure(218): Starting pid=35, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38428,1530516865163; forceNewPlan=false, retain=true 2018-07-02 07:35:04,496 INFO [PEWorker-16] assignment.AssignProcedure(218): Starting pid=36, ppid=33, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee; rit=OFFLINE, location=asf911.gq1.ygridcore.net,38428,1530516865163; forceNewPlan=false, retain=true 2018-07-02 07:35:04,602 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(114): Server [asf911.gq1.ygridcore.net,38428,1530516865163] still being processed, waiting 2018-07-02 07:35:04,646 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 2 regions. 0 retained the pre-restart assignment. 2 regions were assigned to random hosts, since the old hosts for these regions are no longer present in the cluster. These hosts were: 2018-07-02 07:35:04,649 INFO [PEWorker-1] assignment.RegionStateStore(199): pid=35 updating hbase:meta row=d1a74048f8e137b8647beefb747aafba, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,649 INFO [PEWorker-5] assignment.RegionStateStore(199): pid=36 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,658 INFO [PEWorker-1] assignment.RegionTransitionProcedure(241): Dispatch pid=35, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OPENING, location=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,658 INFO [PEWorker-5] assignment.RegionTransitionProcedure(241): Dispatch pid=36, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee; rit=OPENING, location=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,809 DEBUG [RSProcedureDispatcher-pool13-t28] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,811 INFO [RS-EventLoopGroup-14-15] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:59175, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-07-02 07:35:04,812 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=57468] regionserver.RSRpcServices(1983): Open SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 2018-07-02 07:35:04,816 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=57468] regionserver.RSRpcServices(1983): Open hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 
2018-07-02 07:35:04,816 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 0f545ce4fc7475df98047cbbbf56ffee, NAME => 'SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:35:04,817 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table SyncRep 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,817 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:35:04,817 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,817 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => d1a74048f8e137b8647beefb747aafba, NAME => 'hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.', STARTKEY => '', ENDKEY => ''} 2018-07-02 07:35:04,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table namespace d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-07-02 07:35:04,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,821 DEBUG [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/cf 2018-07-02 07:35:04,821 DEBUG [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/cf 2018-07-02 07:35:04,822 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] hfile.CacheConfig(239): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:35:04,823 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:35:04,824 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] regionserver.HStore(327): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:35:04,824 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,826 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,826 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,826 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,827 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,831 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,832 INFO [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 0f545ce4fc7475df98047cbbbf56ffee; next sequenceid=5 2018-07-02 07:35:04,832 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 0f545ce4fc7475df98047cbbbf56ffee 2018-07-02 07:35:04,847 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info 2018-07-02 07:35:04,848 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info 2018-07-02 07:35:04,848 INFO [PostOpenDeployTasks:0f545ce4fc7475df98047cbbbf56ffee] regionserver.HRegionServer(2193): Post open deploy tasks for SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 
2018-07-02 07:35:04,852 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-07-02 07:35:04,853 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-07-02 07:35:04,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=5, pid=36, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee; rit=OPENING, location=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,854 DEBUG [PostOpenDeployTasks:0f545ce4fc7475df98047cbbbf56ffee] regionserver.HRegionServer(2217): Finished post open deploy task for SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 2018-07-02 07:35:04,854 DEBUG [PEWorker-3] assignment.RegionTransitionProcedure(354): Finishing pid=36, ppid=33, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee; rit=OPENING, location=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,861 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. 
on asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,862 INFO [PEWorker-3] assignment.RegionStateStore(199): pid=36 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=OPEN, repBarrier=5, openSeqNum=5, regionLocation=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,873 DEBUG [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] regionserver.HStore(581): loaded hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/info/bc91ddc16ad54a6d9efa5b724ba1622f 2018-07-02 07:35:04,873 INFO [StoreOpener-d1a74048f8e137b8647beefb747aafba-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50 2018-07-02 07:35:04,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,878 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,879 INFO [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened d1a74048f8e137b8647beefb747aafba; next sequenceid=13 2018-07-02 07:35:04,879 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for d1a74048f8e137b8647beefb747aafba 2018-07-02 07:35:04,880 INFO [PostOpenDeployTasks:d1a74048f8e137b8647beefb747aafba] regionserver.HRegionServer(2193): Post open deploy tasks for hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 
2018-07-02 07:35:04,881 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=13, pid=35, ppid=33, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OPENING, location=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,882 DEBUG [PEWorker-6] assignment.RegionTransitionProcedure(354): Finishing pid=35, ppid=33, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba; rit=OPENING, location=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,882 INFO [PEWorker-6] assignment.RegionStateStore(199): pid=35 updating hbase:meta row=d1a74048f8e137b8647beefb747aafba, regionState=OPEN, openSeqNum=13, regionLocation=asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,882 DEBUG [PostOpenDeployTasks:d1a74048f8e137b8647beefb747aafba] regionserver.HRegionServer(2217): Finished post open deploy task for hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. 2018-07-02 07:35:04,885 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba. on asf911.gq1.ygridcore.net,57468,1530516898088 2018-07-02 07:35:04,924 INFO [PEWorker-16] procedure2.ProcedureExecutor(1266): Finished pid=36, ppid=33, state=SUCCESS; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee in 550msec 2018-07-02 07:35:05,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:35:05,371 INFO [PEWorker-6] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=33, state=RUNNABLE:SERVER_CRASH_HANDLE_RIT2; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true; resume parent processing. 
2018-07-02 07:35:05,372 INFO [PEWorker-6] procedure2.ProcedureExecutor(1266): Finished pid=35, ppid=33, state=SUCCESS; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba in 573msec
2018-07-02 07:35:05,472 INFO [PEWorker-14] procedure2.ProcedureExecutor(1266): Finished pid=33, state=SUCCESS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,38428,1530516865163, splitWal=true, meta=true in 2.4550sec
2018-07-02 07:35:05,602 DEBUG [Thread-1561] replication.TestSyncReplicationStandbyKillRS(117): Server [asf911.gq1.ygridcore.net,38428,1530516865163] done with server shutdown processing
2018-07-02 07:35:05,751 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time
2018-07-02 07:35:05,751 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time
2018-07-02 07:35:05,759 INFO [Thread-1561] client.ConnectionUtils(122): regionserver/asf911:0 server-side Connection retries=45
2018-07-02 07:35:05,759 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-07-02 07:35:05,759 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=60, handlerCount=6
2018-07-02 07:35:05,759 INFO [Thread-1561] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-07-02 07:35:05,759 INFO [Thread-1561] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-07-02 07:35:05,760 INFO [Thread-1561] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-07-02 07:35:05,761 INFO [Thread-1561] ipc.NettyRpcServer(110): Bind to /67.195.81.155:40536
2018-07-02 07:35:05,761 INFO [Thread-1561] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:05,762 INFO [Thread-1561] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=1, currentSize=748.48 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=748.48 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:35:05,763 INFO [Thread-1561] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:35:05,765 INFO [Thread-1561] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-07-02 07:35:05,766 INFO [Thread-1561] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:40536 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:35:05,782 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:405360x0, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:35:05,784 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(543): regionserver:40536-0x16459e9b450003d connected
2018-07-02 07:35:05,784 DEBUG [Thread-1561] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/master
2018-07-02 07:35:05,785 DEBUG [Thread-1561] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/running
2018-07-02 07:35:05,787 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40536
2018-07-02 07:35:05,789 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=40536
2018-07-02 07:35:05,789 DEBUG [Thread-1561] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40536
2018-07-02 07:35:05,794 INFO [RS:5;asf911:40536] regionserver.HRegionServer(874): ClusterId : 4453c2bd-27e1-4723-9c16-c1873c79d2e4
2018-07-02 07:35:05,794 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing
2018-07-02 07:35:05,800 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized
2018-07-02 07:35:05,800 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing
2018-07-02 07:35:05,807 INFO [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(387): Atomically moving asf911.gq1.ygridcore.net,33727,1530516865112/1's WALs to asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:05,815 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized
2018-07-02 07:35:05,817 INFO [RS:5;asf911:40536] zookeeper.ReadOnlyZKClient(139): Connect 0x6b02adc8 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:35:05,818 DEBUG [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(414): Creating asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250 with data PBUF\x08\xC8\x02
2018-07-02 07:35:05,824 INFO [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(426): Atomically moved asf911.gq1.ygridcore.net,33727,1530516865112/1's WALs to asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:05,833 DEBUG [RS:5;asf911:40536] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65c54ed8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:35:05,833 DEBUG [RS:5;asf911:40536] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61fac6b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf911.gq1.ygridcore.net/67.195.81.155:0
2018-07-02 07:35:05,833 DEBUG [RS:5;asf911:40536] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:5;asf911:40536
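The three RpcExecutor lines above follow from the handler-count settings for the default, priority, and replication call queues; note that each maxQueueLength is ten times its handlerCount, matching the usual queue-length default of ten slots per handler. A sketch of the corresponding configuration, assuming the long-standing HBase key names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcHandlerConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // default.FPBQ.Fifo: handlerCount=5 -> maxQueueLength=50
        conf.setInt("hbase.regionserver.handler.count", 5);
        // priority.FPBQ.Fifo: handlerCount=6 -> maxQueueLength=60
        conf.setInt("hbase.regionserver.metahandler.count", 6);
        // replication.FPBQ.Fifo: handlerCount=3 -> maxQueueLength=30
        conf.setInt("hbase.regionserver.replication.handler.count", 3);
      }
    }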
2018-07-02 07:35:05,834 INFO [RS:5;asf911:40536] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-07-02 07:35:05,834 INFO [RS:5;asf911:40536] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-07-02 07:35:05,834 INFO [RS:5;asf911:40536] regionserver.HRegionServer(2605): reportForDuty to master=asf911.gq1.ygridcore.net,44014,1530516864901 with port=40536, startcode=1530516905630
2018-07-02 07:35:05,836 INFO [RS-EventLoopGroup-9-8] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:56705, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService
2018-07-02 07:35:05,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.ServerManager(439): Registering regionserver=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,838 DEBUG [RS:5;asf911:40536] regionserver.HRegionServer(1505): Config from master: hbase.rootdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950
2018-07-02 07:35:05,838 DEBUG [RS:5;asf911:40536] regionserver.HRegionServer(1505): Config from master: fs.defaultFS=hdfs://localhost:42386
2018-07-02 07:35:05,838 DEBUG [RS:5;asf911:40536] regionserver.HRegionServer(1505): Config from master: hbase.master.info.port=-1
2018-07-02 07:35:05,848 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:35:05,848 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:35:05,848 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:35:05,848 DEBUG [RS:5;asf911:40536] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,849 WARN [RS:5;asf911:40536] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
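The TestSyncReplicationStandbyKillRS entry above ("done with server shutdown processing") is the test blocking until the master finishes the ServerCrashProcedure for the killed region server. A sketch of that wait pattern, assuming the test-utility waitFor helper and the ServerManager dead-server check; the exact predicate the test uses is an assumption:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.master.HMaster;

    public class WaitForCrashProcessing {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);
        HMaster master = util.getMiniHBaseCluster().getMaster();
        // Block (up to 60s) until no dead-server processing is in flight,
        // i.e. WAL splitting and region reassignment have completed.
        util.waitFor(60_000,
            () -> !master.getServerManager().areDeadServersInProgress());
        util.shutdownMiniCluster();
      }
    }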
2018-07-02 07:35:05,849 INFO [RS:5;asf911:40536] wal.WALFactory(136): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2018-07-02 07:35:05,849 DEBUG [RS:5;asf911:40536] regionserver.HRegionServer(1815): logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,849 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:05,849 INFO [RegionServerTracker-0] master.RegionServerTracker(170): RegionServer ephemeral node created, adding [asf911.gq1.ygridcore.net,40536,1530516905630]
2018-07-02 07:35:05,849 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:05,850 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,850 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,850 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:05,853 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:05,882 DEBUG [RS:5;asf911:40536] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:35:05,883 DEBUG [RS:5;asf911:40536] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,883 DEBUG [RS:5;asf911:40536] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:05,884 DEBUG [RS:5;asf911:40536] regionserver.Replication(144): Replication stats-in-log period=5 seconds
2018-07-02 07:35:05,885 INFO [RS:5;asf911:40536] regionserver.MetricsRegionServerWrapperImpl(145): Computing regionserver metrics every 5000 milliseconds
2018-07-02 07:35:05,887 INFO [RS:5;asf911:40536] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false
2018-07-02 07:35:05,887 INFO [RS:5;asf911:40536] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2018-07-02 07:35:05,888 INFO [RS:5;asf911:40536] regionserver.HRegionServer$CompactionChecker(1706): CompactionChecker runs every PT0.1S
2018-07-02 07:35:05,891 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:05,892 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:35:05,892 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:05,892 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:05,892 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:35:05,892 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2
2018-07-02 07:35:05,892 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0, corePoolSize=10, maxPoolSize=10
2018-07-02 07:35:05,893 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf911:0, corePoolSize=3, maxPoolSize=3
2018-07-02 07:35:05,893 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf911:0, corePoolSize=2, maxPoolSize=2
2018-07-02 07:35:05,893 DEBUG [RS:5;asf911:40536] executor.ExecutorService(92): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/asf911:0, corePoolSize=1, maxPoolSize=1
2018-07-02 07:35:05,914 INFO [RS:5;asf911:40536] regionserver.HeapMemoryManager(210): Starting, tuneOn=false
2018-07-02 07:35:05,914 INFO [SplitLogWorker-asf911:40536] regionserver.SplitLogWorker(211): SplitLogWorker asf911.gq1.ygridcore.net,40536,1530516905630 starting
2018-07-02 07:35:05,940 INFO [RS:5;asf911:40536] regionserver.ReplicationSource(178): queueId=1, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:35:05,947 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager(257): Current list of replicators: [asf911.gq1.ygridcore.net,33727,1530516865112, asf911.gq1.ygridcore.net,38428,1530516865163, asf911.gq1.ygridcore.net,57468,1530516898088, asf911.gq1.ygridcore.net,46345,1530516902414] other RSs: [asf911.gq1.ygridcore.net,57468,1530516898088, asf911.gq1.ygridcore.net,40536,1530516905630, asf911.gq1.ygridcore.net,46345,1530516902414]
2018-07-02 07:35:05,971 INFO [RS:5;asf911:40536] regionserver.HRegionServer(1546): Serving as asf911.gq1.ygridcore.net,40536,1530516905630, RpcServer on asf911.gq1.ygridcore.net/67.195.81.155:40536, sessionid=0x16459e9b450003d
2018-07-02 07:35:05,972 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting
2018-07-02 07:35:05,972 DEBUG [RS:5;asf911:40536] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,972 DEBUG [RS:5;asf911:40536] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,40536,1530516905630'
2018-07-02 07:35:05,973 DEBUG [RS:5;asf911:40536] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/flush-table-proc/abort'
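Each "Starting executor service name=..., corePoolSize=N, maxPoolSize=N" entry above is a dedicated, bounded thread pool for one class of region server event. A plain java.util.concurrent equivalent of the RS_OPEN_REGION pool, as a rough illustration rather than the actual ExecutorService wrapper HBase uses:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class OpenRegionPoolSketch {
      public static void main(String[] args) {
        // corePoolSize=3, maxPoolSize=3, matching the RS_OPEN_REGION line above;
        // an unbounded queue holds open-region tasks when all workers are busy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            3, 3, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.submit(() -> System.out.println("open-region task runs here"));
        pool.shutdown();
      }
    }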
2018-07-02 07:35:05,973 DEBUG [RS:5;asf911:40536] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/flush-table-proc/acquired'
2018-07-02 07:35:05,974 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started
2018-07-02 07:35:05,974 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting
2018-07-02 07:35:05,974 DEBUG [RS:5;asf911:40536] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:35:05,974 DEBUG [RS:5;asf911:40536] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf911.gq1.ygridcore.net,40536,1530516905630'
2018-07-02 07:35:05,974 DEBUG [RS:5;asf911:40536] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/cluster2/online-snapshot/abort'
2018-07-02 07:35:05,974 DEBUG [RS:5;asf911:40536] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/cluster2/online-snapshot/acquired'
2018-07-02 07:35:05,975 DEBUG [RS:5;asf911:40536] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-07-02 07:35:05,975 INFO [RS:5;asf911:40536] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-07-02 07:35:05,975 INFO [RS:5;asf911:40536] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager.
2018-07-02 07:35:05,996 INFO [RS:5;asf911:40536.replicationSource,1] zookeeper.ReadOnlyZKClient(139): Connect 0x13c02235 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:35:06,008 DEBUG [RS:5;asf911:40536.replicationSource,1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@749c076e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:35:06,009 INFO [RS:5;asf911:40536.replicationSource,1] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:35:06,015 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2018-07-02 07:35:06,015 DEBUG [RS:5;asf911:40536.replicationSource,1-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:35:06,017 DEBUG [RS:5;asf911:40536.replicationSource,1-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500040 connected
2018-07-02 07:35:06,020 INFO [RS:5;asf911:40536.replicationSource,1] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:35:06,979 WARN [RS:5;asf911:40536] wal.AbstractFSWAL(419): 'hbase.regionserver.maxlogs' was deprecated.
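The ZKProcedureMemberRpcs entries above show the coordination pattern: each member watches an "acquired" znode for new procedures to join and an "abort" znode for bail-outs. A minimal sketch of that check with the raw ZooKeeper client, paths copied from the log and error handling elided:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ProcedureZnodeCheck {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:59178", 90_000, event -> { });
        // Children under "acquired" are procedures this member should try to
        // join; anything under "abort" means back out. true = leave a watch.
        List<String> aborted  = zk.getChildren("/cluster2/flush-table-proc/abort", true);
        List<String> acquired = zk.getChildren("/cluster2/flush-table-proc/acquired", true);
        System.out.println("aborted=" + aborted + ", acquired=" + acquired);
        zk.close();
      }
    }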
2018-07-02 07:35:06,980 INFO [RS:5;asf911:40536] wal.AbstractFSWAL(424): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf911.gq1.ygridcore.net%2C40536%2C1530516905630, suffix=, logDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630, archiveDir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:35:06,993 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK]
2018-07-02 07:35:06,993 DEBUG [RS-EventLoopGroup-16-4] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK]
2018-07-02 07:35:06,993 DEBUG [RS-EventLoopGroup-16-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(737): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]
2018-07-02 07:35:07,065 DEBUG [RS:5;asf911:40536] regionserver.ReplicationSourceManager(773): Start tracking logs for wal group asf911.gq1.ygridcore.net%2C40536%2C1530516905630 for peer 1
2018-07-02 07:35:07,066 INFO [RS:5;asf911:40536] wal.AbstractFSWAL(686): New WAL /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980
2018-07-02 07:35:07,066 DEBUG [RS:5;asf911:40536] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C40536%2C1530516905630
2018-07-02 07:35:07,066 INFO [RS:5;asf911:40536] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:35:07,067 DEBUG [RS:5;asf911:40536] wal.AbstractFSWAL(775): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38320,DS-c02e3dde-4ee5-4268-849e-c97455f318a6,DISK], DatanodeInfoWithStorage[127.0.0.1:51748,DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82,DISK], DatanodeInfoWithStorage[127.0.0.1:49540,DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8,DISK]]
2018-07-02 07:35:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:07,840 DEBUG [ReplicationExecutor-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112 already deleted, retry=false
2018-07-02 07:35:07,841 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,33727,1530516865112]
2018-07-02 07:35:07,842 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,33727,1530516865112]
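In the WAL configuration entry above, rollsize is exactly half of blocksize; that is consistent with the roll size being derived from the block size via hbase.regionserver.logroll.multiplier, assumed here to default to 0.5 on this branch:

    public class WalRollSize {
      public static void main(String[] args) {
        long blocksize = 256L * 1024 * 1024;  // blocksize=256 MB from the log
        double multiplier = 0.5;              // hbase.regionserver.logroll.multiplier (assumed default)
        long rollsize = (long) (blocksize * multiplier);
        System.out.println(rollsize);         // 134217728 bytes, i.e. rollsize=128 MB
      }
    }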
2018-07-02 07:35:07,874 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,33727,1530516865112]
2018-07-02 07:35:07,874 INFO [ReplicationExecutor-0] regionserver.ReplicationSource(178): queueId=1-asf911.gq1.ygridcore.net,33727,1530516865112, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:35:07,910 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112] zookeeper.ReadOnlyZKClient(139): Connect 0x6bf6815f to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:35:07,925 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@7990d3bf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:35:07,926 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:35:07,932 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:35:07,934 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500042 connected
2018-07-02 07:35:07,935 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
2018-07-02 07:35:07,935 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C33727%2C1530516865112
2018-07-02 07:35:07,938 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1-asf911.gq1.ygridcore.net,33727,1530516865112, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:35:07,946 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.WALEntryStream(250): Reached the end of log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250
2018-07-02 07:35:07,948 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,33727,1530516865112]
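The WAL-reader init entries report the replication batch limits; replicationBatchSizeCapacity=102400 is plainly a test-tuned value (the stock default is far larger), while 25000 is the usual count default. Assuming the long-standing key names, the equivalent configuration is:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ReplicationBatchConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // replicationBatchSizeCapacity: max bytes of WAL entries per shipped batch
        conf.setLong("replication.source.size.capacity", 102400);
        // replicationBatchCountCapacity: max number of entries per shipped batch
        conf.setInt("replication.source.nb.capacity", 25000);
      }
    }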
2018-07-02 07:35:07,957 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSourceManager(693): Removing 1 logs in the list: [asf911.gq1.ygridcore.net%2C33727%2C1530516865112.1530516883250]
2018-07-02 07:35:07,957 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSourceManager(707): Removing 0 logs from remote dir hdfs://localhost:38505/user/jenkins/test-data/1137c3d2-c249-7965-6f0d-109656cbd370/remoteWALs in the list: []
2018-07-02 07:35:07,990 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSourceShipper(124): Finished recovering queue for group asf911.gq1.ygridcore.net%2C33727%2C1530516865112 of peer 1-asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:35:07,992 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSourceManager(526): Done with the recovered queue 1-asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:35:08,024 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,57468,1530516898088/1-asf911.gq1.ygridcore.net,33727,1530516865112
2018-07-02 07:35:08,024 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112.replicationSource.shipperasf911.gq1.ygridcore.net%2C33727%2C1530516865112,1-asf911.gq1.ygridcore.net,33727,1530516865112] regionserver.ReplicationSourceManager(539): Finished recovering queue 1-asf911.gq1.ygridcore.net,33727,1530516865112 with the following stats: Total replicated edits: 0, current progress:
2018-07-02 07:35:08,181 DEBUG [ReplicationExecutor-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,33727,1530516865112 already deleted, retry=false
2018-07-02 07:35:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:08,532 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-07-02 07:35:09,495 INFO [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(387): Atomically moving asf911.gq1.ygridcore.net,38428,1530516865163/1's WALs to asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:09,508 DEBUG [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(414): Creating asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251 with data PBUF\x08\xC2\x07
2018-07-02 07:35:09,537 INFO [ReplicationExecutor-0] replication.ZKReplicationQueueStorage(426): Atomically moved asf911.gq1.ygridcore.net,38428,1530516865163/1's WALs to asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:35:10,318 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-07-02 07:35:10,318 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-07-02 07:35:10,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:11,557 DEBUG [ReplicationExecutor-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163 already deleted, retry=false
2018-07-02 07:35:11,557 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,38428,1530516865163]
2018-07-02 07:35:11,557 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,38428,1530516865163]
2018-07-02 07:35:11,578 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(110): Found dead servers:[asf911.gq1.ygridcore.net,38428,1530516865163]
2018-07-02 07:35:11,579 INFO [ReplicationExecutor-0] regionserver.ReplicationSource(178): queueId=1-asf911.gq1.ygridcore.net,38428,1530516865163, ReplicationSource : 1, currentBandwidth=0
2018-07-02 07:35:11,619 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163] zookeeper.ReadOnlyZKClient(139): Connect 0x4bc7bd55 to localhost:59178 with session timeout=90000ms, retries 1, retry interval 10ms, keepAlive=60000ms
2018-07-02 07:35:11,625 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@3923a807, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-07-02 07:35:11,625 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163] zookeeper.RecoverableZooKeeper(106): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:59178
2018-07-02 07:35:11,632 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163-EventThread] zookeeper.ZKWatcher(478): connection to cluster: 10x0, quorum=localhost:59178, baseZNode=/cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-07-02 07:35:11,635 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163-EventThread] zookeeper.ZKWatcher(543): connection to cluster: 1-0x16459e9b4500044 connected
2018-07-02 07:35:11,635 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.ReplicationSource(448): Replicating 4453c2bd-27e1-4723-9c16-c1873c79d2e4 -> 62bd510b-3b5c-46d2-af05-cbc0179a0f7b
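The "Atomically moving ... WALs" / "Atomically moved ..." pair above is ZKReplicationQueueStorage claiming a dead server's replication queue. The claim has to be a single atomic ZooKeeper transaction so that two surviving region servers cannot both adopt the queue. A simplified sketch of that multi-op pattern (paths shortened, parent znodes assumed to exist; the real payload, logged as PBUF\x08\xC2\x07, is a protobuf-encoded WAL offset):

    import java.util.Arrays;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Op;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ClaimQueueSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:59178", 90_000, event -> { });
        String wal = "asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251";
        String oldQueue = "/cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163/1/" + wal;
        String newQueue = "/cluster2/replication/rs/asf911.gq1.ygridcore.net,46345,1530516902414/"
            + "1-asf911.gq1.ygridcore.net,38428,1530516865163/" + wal;
        byte[] position = {0x08, (byte) 0xC2, 0x07};  // illustrative protobuf bytes
        // Create under the new owner and delete under the dead one in one shot:
        // either both operations succeed or neither does.
        zk.multi(Arrays.asList(
            Op.create(newQueue, position, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT),
            Op.delete(oldQueue, -1)));
        zk.close();
      }
    }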
2018-07-02 07:35:11,636 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.ReplicationSource(305): Starting up worker for wal group asf911.gq1.ygridcore.net%2C38428%2C1530516865163
2018-07-02 07:35:11,637 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.ReplicationSourceWALReader(114): peerClusterZnode=1-asf911.gq1.ygridcore.net,38428,1530516865163, ReplicationSourceWALReaderThread : 1 inited, replicationBatchSizeCapacity=102400, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1
2018-07-02 07:35:11,645 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C38428%2C1530516865163,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.WALEntryStream(250): Reached the end of log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251
2018-07-02 07:35:11,648 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.ReplicationSourceShipper(124): Finished recovering queue for group asf911.gq1.ygridcore.net%2C38428%2C1530516865163 of peer 1-asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:11,649 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.ReplicationSourceManager(526): Done with the recovered queue 1-asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:11,650 DEBUG [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1-asf911.gq1.ygridcore.net,38428,1530516865163] zookeeper.ZKUtil(355): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/replication/rs/asf911.gq1.ygridcore.net,46345,1530516902414/1-asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251
2018-07-02 07:35:11,657 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,46345,1530516902414/1-asf911.gq1.ygridcore.net,38428,1530516865163/asf911.gq1.ygridcore.net%2C38428%2C1530516865163.1530516883251
2018-07-02 07:35:11,657 INFO [ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163.replicationSource.shipperasf911.gq1.ygridcore.net%2C38428%2C1530516865163,1-asf911.gq1.ygridcore.net,38428,1530516865163] regionserver.ReplicationSourceManager(539): Finished recovering queue 1-asf911.gq1.ygridcore.net,38428,1530516865163 with the following stats: Total replicated edits: 0, current progress:
2018-07-02 07:35:11,657 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,46345,1530516902414/1-asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:11,659 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/replication/rs/asf911.gq1.ygridcore.net,46345,1530516902414/1-asf911.gq1.ygridcore.net,38428,1530516865163
2018-07-02 07:35:11,965 DEBUG [ReplicationExecutor-0] zookeeper.RecoverableZooKeeper(176): Node /cluster2/replication/rs/asf911.gq1.ygridcore.net,38428,1530516865163 already deleted, retry=false
2018-07-02 07:35:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:13,495 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2018-07-02 07:35:15,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:35:15,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:17,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:20,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:22,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:25,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:35:25,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:30,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:35,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:35:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:40,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:45,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:35:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:50,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:52,125 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time
2018-07-02 07:35:52,125 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time
2018-07-02 07:35:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:35:55,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:35:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:35:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:05,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:36:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:10,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:15,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:17,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:20,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:22,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:25,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:36:25,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:30,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:40,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:36:45,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:36:45,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:36:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:36:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from:
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:36:50,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:36:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:36:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:36:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:36:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:36:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:00,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:05,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:37:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:10,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total 
replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:15,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:17,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:20,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:22,601 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: 
walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:25,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:37:25,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:30,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:31,250 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:37:31,250 INFO [regionserver/asf911:0.Chore.3] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:37:31,251 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time 2018-07-02 07:37:31,252 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time 2018-07-02 07:37:32,601 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:33,389 INFO 
[asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:40,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:45,386 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:37:45,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:50,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:37:55,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:37:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:37:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:00,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:05,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:38:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total 
replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:10,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:15,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:17,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: 
walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:20,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:22,601 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:25,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:38:25,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:30,970 INFO [asf911:40536Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:40,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:42,600 INFO [asf911:46345Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:45,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:38:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:47,601 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:50,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:38:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:38:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:38:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:39:00,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:39:02,601 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:39:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:39:05,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:39:05,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:39:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:39:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:39:10,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1 2018-07-02 07:39:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:39:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671 2018-07-02 07:39:14,092 DEBUG [Thread-158-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(427): index stats (chunk 
2018-07-02 07:39:14,092 DEBUG [Thread-158-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(427): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-07-02 07:39:14,092 DEBUG [Thread-158-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(427): data stats (chunk size=2097152): current pool size=7, created chunk count=9, reused chunk count=3, reuseRatio=25.00%
2018-07-02 07:39:15,418 WARN [snapshot-hfile-cleaner-cache-refresher] snapshot.SnapshotFileCache$RefreshCacheTask(315): Failed to refresh snapshot hfile cache!
java.net.ConnectException: Call From asf911.gq1.ygridcore.net/67.195.81.155 to localhost:38505 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy27.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
    at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:211)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.access$000(SnapshotFileCache.java:79)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask.run(SnapshotFileCache.java:313)
    at java.util.TimerThread.mainLoop(Timer.java:555)
    at java.util.TimerThread.run(Timer.java:505)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 25 more
2018-07-02 07:39:23,922 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2018-07-02 07:39:25,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:39:25,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: -1
2018-07-02 07:39:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:28,249 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(176): Chore: CompactionChecker missed its start time
2018-07-02 07:39:28,250 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(176): Chore: MemstoreFlusherChore missed its start time
2018-07-02 07:39:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 671
2018-07-02 07:39:29,262 INFO [RS-EventLoopGroup-15-17] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:49203, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-07-02 07:39:29,273 DEBUG [master/asf911:0.Chore.1] balancer.StochasticLoadBalancer(297): RegionReplicaHostCostFunction not needed
2018-07-02 07:39:29,273 DEBUG [master/asf911:0.Chore.1] balancer.StochasticLoadBalancer(297): RegionReplicaRackCostFunction not needed
2018-07-02 07:39:29,274 INFO [master/asf911:0.Chore.1] balancer.StochasticLoadBalancer(377): start StochasticLoadBalancer.balancer, initCost=285.0, functionCost=RegionCountSkewCostFunction : (500.0, 0.5); PrimaryRegionCountSkewCostFunction : (500.0, 0.0); MoveCostFunction : (7.0, 0.0); ServerLocalityCostFunction : (25.0, 0.0); RackLocalityCostFunction : (15.0, 0.0); TableSkewCostFunction : (35.0, 1.0); RegionReplicaHostCostFunction : (100000.0, 0.0); RegionReplicaRackCostFunction : (10000.0, 0.0); ReadRequestCostFunction : (5.0, 0.0); CPRequestCostFunction : (5.0, 0.0); WriteRequestCostFunction : (5.0, 0.0); MemStoreSizeCostFunction : (5.0, 0.0); StoreFileCostFunction : (5.0, 0.0);
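For reference, the initCost printed above is just the weighted sum of the (weight, cost) pairs that follow it, and only two of the functions report a non-zero cost here. A minimal arithmetic check, using only numbers from that entry:

    // Check that the logged initCost=285.0 is the weighted sum of the printed pairs.
    public final class BalancerCostCheck {
        public static void main(String[] args) {
            double regionCountSkew = 500.0 * 0.5; // RegionCountSkewCostFunction : (500.0, 0.5)
            double tableSkew = 35.0 * 1.0;        // TableSkewCostFunction : (35.0, 1.0)
            // Every other function above reports cost 0.0, so it contributes nothing.
            System.out.println(regionCountSkew + tableSkew); // 285.0, matching initCost
        }
    }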
2018-07-02 07:39:29,463 DEBUG [master/asf911:0.Chore.1] balancer.StochasticLoadBalancer(437): Finished computing new load balance plan. Computation took 192ms to try 7200 different iterations. Found a solution that moves 1 regions; Going from a computed cost of 285.0 to a new cost of 37.333333333333336
2018-07-02 07:39:29,463 INFO [master/asf911:0.Chore.1] master.HMaster(1542): Balancer plans size is 1, the balance interval is 300000 ms, and the max number regions in transition is 3
2018-07-02 07:39:29,463 INFO [master/asf911:0.Chore.1] master.HMaster(1547): balance hri=0f545ce4fc7475df98047cbbbf56ffee, source=asf911.gq1.ygridcore.net,57468,1530516898088, destination=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:29,660 DEBUG [master/asf911:0.Chore.1] procedure2.ProcedureExecutor(887): Stored pid=37, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure hri=0f545ce4fc7475df98047cbbbf56ffee, source=asf911.gq1.ygridcore.net,57468,1530516898088, destination=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:29,663 INFO [PEWorker-8] procedure.MasterProcedureScheduler(697): pid=37, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure hri=0f545ce4fc7475df98047cbbbf56ffee, source=asf911.gq1.ygridcore.net,57468,1530516898088, destination=asf911.gq1.ygridcore.net,40536,1530516905630 checking lock on 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:29,663 INFO [PEWorker-8] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, server=asf911.gq1.ygridcore.net,57468,1530516898088}]
2018-07-02 07:39:29,745 INFO [PEWorker-4] procedure.MasterProcedureScheduler(697): pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, server=asf911.gq1.ygridcore.net,57468,1530516898088 checking lock on 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:29,746 INFO [PEWorker-4] assignment.RegionStateStore(199): pid=38 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=CLOSING, regionLocation=asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:39:29,752 INFO [PEWorker-4] assignment.RegionTransitionProcedure(241): Dispatch pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, server=asf911.gq1.ygridcore.net,57468,1530516898088; rit=CLOSING, location=asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:39:29,904 INFO [RS-EventLoopGroup-14-16] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:46014, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2018-07-02 07:39:29,905 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=57468] regionserver.RSRpcServices(1607): Close 0f545ce4fc7475df98047cbbbf56ffee, moving to asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:29,908 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 0f545ce4fc7475df98047cbbbf56ffee, disabling compactions & flushes
2018-07-02 07:39:29,908 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
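The move above was chosen by the balancer chore, but the same unassign/assign pair can be requested explicitly through the client API. A minimal sketch, assuming an HBase client version whose Admin#move accepts a ServerName destination (2.2 and later do); the encoded region name and destination server are copied from the log entries above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: request the same region move by hand instead of waiting for the balancer.
    public final class MoveSyncRepRegion {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Encoded region name and destination copied from the balance entry above.
                admin.move(Bytes.toBytes("0f545ce4fc7475df98047cbbbf56ffee"),
                    ServerName.valueOf("asf911.gq1.ygridcore.net", 40536, 1530516905630L));
            }
        }
    }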
2018-07-02 07:39:29,917 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4
2018-07-02 07:39:29,919 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:39:29,920 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegionServer(3426): Adding 0f545ce4fc7475df98047cbbbf56ffee move to asf911.gq1.ygridcore.net,40536,1530516905630 record at close sequenceid=5
2018-07-02 07:39:29,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report CLOSED seqId=-1, pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, server=asf911.gq1.ygridcore.net,57468,1530516898088; rit=CLOSING, location=asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:39:29,923 DEBUG [PEWorker-12] assignment.RegionTransitionProcedure(354): Finishing pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH; UnassignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, server=asf911.gq1.ygridcore.net,57468,1530516898088; rit=CLOSING, location=asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:39:29,923 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:39:29,923 INFO [PEWorker-12] assignment.RegionStateStore(199): pid=38 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=CLOSED
2018-07-02 07:39:30,063 INFO [PEWorker-12] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=37, state=RUNNABLE:MOVE_REGION_ASSIGN; MoveRegionProcedure hri=0f545ce4fc7475df98047cbbbf56ffee, source=asf911.gq1.ygridcore.net,57468,1530516898088, destination=asf911.gq1.ygridcore.net,40536,1530516905630; resume parent processing.
2018-07-02 07:39:30,063 INFO [PEWorker-11] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630}]
2018-07-02 07:39:30,063 INFO [PEWorker-12] procedure2.ProcedureExecutor(1266): Finished pid=38, ppid=37, state=SUCCESS; UnassignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, server=asf911.gq1.ygridcore.net,57468,1530516898088 in 263msec
2018-07-02 07:39:30,138 INFO [PEWorker-11] procedure.MasterProcedureScheduler(697): pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630 checking lock on 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,140 INFO [PEWorker-11] assignment.AssignProcedure(218): Starting pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630; rit=OFFLINE, location=asf911.gq1.ygridcore.net,40536,1530516905630; forceNewPlan=false, retain=false
2018-07-02 07:39:30,290 INFO [master/asf911:0] balancer.BaseLoadBalancer(1497): Reassigned 1 regions. 1 retained the pre-restart assignment.
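The 7.seqid marker written at close above records the region's highest sequence id; when the region reopens on the destination server below, it comes up with next sequenceid = newMaxSeqId + 1 (the open at 07:39:30,466 indeed logs next sequenceid=8). A minimal sketch of that relationship; the parsing helper is hypothetical, grounded only in the file name format shown above:

    // The "<seqid>.seqid" file name under recovered.edits encodes the max sequence
    // id at close; the reopened region starts at maxSeqId + 1.
    public final class SeqIdMarker {
        // Hypothetical helper: parse the numeric prefix of a "<seqid>.seqid" name.
        static long maxSeqIdOf(String fileName) {
            return Long.parseLong(fileName.substring(0, fileName.indexOf(".seqid")));
        }

        public static void main(String[] args) {
            long maxSeqId = maxSeqIdOf("7.seqid");               // from the WALSplitter entry above
            System.out.println("next sequenceid=" + (maxSeqId + 1)); // 8, matching the open below
        }
    }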
2018-07-02 07:39:30,293 INFO [PEWorker-2] assignment.RegionStateStore(199): pid=39 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=OPENING, regionLocation=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:30,296 INFO [PEWorker-2] assignment.RegionTransitionProcedure(241): Dispatch pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630; rit=OPENING, location=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:30,448 DEBUG [RSProcedureDispatcher-pool13-t30] master.ServerManager(746): New admin connection to asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:30,451 INFO [RS-EventLoopGroup-16-9] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:49703, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2018-07-02 07:39:30,452 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=40536] regionserver.RSRpcServices(1983): Open SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:39:30,455 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7108): Opening region: {ENCODED => 0f545ce4fc7475df98047cbbbf56ffee, NAME => 'SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.', STARTKEY => '', ENDKEY => ''}
2018-07-02 07:39:30,456 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table SyncRep 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,456 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(829): Instantiated SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-07-02 07:39:30,456 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7148): checking encryption for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,456 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(7153): checking classloading for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,459 DEBUG [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/cf
2018-07-02 07:39:30,459 DEBUG [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] util.CommonFSUtils(565): Set storagePolicy=HOT for path=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/cf
2018-07-02 07:39:30,460 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] hfile.CacheConfig(239): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=3, currentSize=752.91 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=752.91 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-07-02 07:39:30,460 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-07-02 07:39:30,461 INFO [StoreOpener-0f545ce4fc7475df98047cbbbf56ffee-1] regionserver.HStore(327): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-07-02 07:39:30,461 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(925): replaying wal for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,463 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(4489): Found 0 recovered edits file(s) under hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,463 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(933): stopping wal replay for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,463 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(945): Cleaning up temporary data for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,464 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(956): Cleaning up detritus for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,465 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(978): writing seq id for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,466 INFO [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(982): Opened 0f545ce4fc7475df98047cbbbf56ffee; next sequenceid=8
2018-07-02 07:39:30,466 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] regionserver.HRegion(989): Running coprocessor post-open hooks for 0f545ce4fc7475df98047cbbbf56ffee
2018-07-02 07:39:30,468 INFO [PostOpenDeployTasks:0f545ce4fc7475df98047cbbbf56ffee] regionserver.HRegionServer(2193): Post open deploy tasks for SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:39:30,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] assignment.RegionTransitionProcedure(264): Received report OPENED seqId=8, pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630; rit=OPENING, location=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:30,472 DEBUG [PEWorker-13] assignment.RegionTransitionProcedure(354): Finishing pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630; rit=OPENING, location=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:30,472 DEBUG [PostOpenDeployTasks:0f545ce4fc7475df98047cbbbf56ffee] regionserver.HRegionServer(2217): Finished post open deploy task for SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:39:30,472 INFO [PEWorker-13] assignment.RegionStateStore(199): pid=39 updating hbase:meta row=0f545ce4fc7475df98047cbbbf56ffee, regionState=OPEN, repBarrier=8, openSeqNum=8, regionLocation=asf911.gq1.ygridcore.net,40536,1530516905630
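The cacheConfig entry above derives every size from maxSize=995.60 MB and the printed factors; judging by the numbers, the single and multi sizes are scaled by minFactor as well (an observation that matches this output, not a documented contract). A quick arithmetic check, values in MB:

    // Quick check of the LruBlockCache sizes logged above (all values in MB).
    public final class LruSizeCheck {
        public static void main(String[] args) {
            double maxSize = 995.60;
            System.out.println(maxSize * 0.95);        // minSize    -> ~945.82, as logged
            System.out.println(maxSize * 0.5 * 0.95);  // multiSize  -> ~472.91, as logged
            System.out.println(maxSize * 0.25 * 0.95); // singleSize -> ~236.46, as logged
        }
    }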
2018-07-02 07:39:30,475 DEBUG [RS_OPEN_REGION-regionserver/asf911:0-0] handler.OpenRegionHandler(128): Opened SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee. on asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:39:30,686 INFO [PEWorker-13] procedure2.ProcedureExecutor(1635): Finished subprocedure(s) of pid=37, state=RUNNABLE; MoveRegionProcedure hri=0f545ce4fc7475df98047cbbbf56ffee, source=asf911.gq1.ygridcore.net,57468,1530516898088, destination=asf911.gq1.ygridcore.net,40536,1530516905630; resume parent processing.
2018-07-02 07:39:30,687 INFO [PEWorker-13] procedure2.ProcedureExecutor(1266): Finished pid=39, ppid=37, state=SUCCESS; AssignProcedure table=SyncRep, region=0f545ce4fc7475df98047cbbbf56ffee, target=asf911.gq1.ygridcore.net,40536,1530516905630 in 414msec
2018-07-02 07:39:30,765 INFO [PEWorker-10] procedure2.ProcedureExecutor(1266): Finished pid=37, state=SUCCESS; MoveRegionProcedure hri=0f545ce4fc7475df98047cbbbf56ffee, source=asf911.gq1.ygridcore.net,57468,1530516898088, destination=asf911.gq1.ygridcore.net,40536,1530516905630 in 1.2050sec
2018-07-02 07:39:30,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:39:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:39:35,425 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2018-07-02 07:39:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:39:36,010 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-07-02 07:39:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:39:40,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:39:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:43,472 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:39:45,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:39:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:39:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:39:50,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:39:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:39:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:39:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:39:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:00,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:04,757 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 17518ms
2018-07-02 07:40:04,858 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 84442ms
2018-07-02 07:40:04,951 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 12791ms
2018-07-02 07:40:05,050 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 90390ms
2018-07-02 07:40:05,159 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 160017ms
2018-07-02 07:40:05,256 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 56508ms
2018-07-02 07:40:05,350 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 13821ms
2018-07-02 07:40:05,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:40:05,450 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 34362ms
2018-07-02 07:40:05,557 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 26829ms
2018-07-02 07:40:05,650 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 243537ms
2018-07-02 07:40:05,750 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 171176ms
2018-07-02 07:40:05,860 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 235084ms
2018-07-02 07:40:05,958 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 286065ms
2018-07-02 07:40:05,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:06,050 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 122990ms
2018-07-02 07:40:06,151 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 219070ms
2018-07-02 07:40:06,250 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 294874ms
2018-07-02 07:40:06,351 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 2331ms
2018-07-02 07:40:06,452 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 228182ms
2018-07-02 07:40:06,550 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 5611ms
2018-07-02 07:40:06,650 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 226932ms
2018-07-02 07:40:06,751 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 130735ms
2018-07-02 07:40:06,858 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 256576ms
2018-07-02 07:40:06,957 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 43428ms
2018-07-02 07:40:07,058 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 160187ms
2018-07-02 07:40:07,154 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 69444ms
2018-07-02 07:40:07,250 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 139072ms
2018-07-02 07:40:07,359 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 98312ms
2018-07-02 07:40:07,457 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 230273ms
2018-07-02 07:40:07,552 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 82073ms
2018-07-02 07:40:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:07,651 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 69300ms
2018-07-02 07:40:07,751 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 102979ms
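Each PeriodicMemStoreFlusher request above defers the actual flush by a fresh random delay; every sampled delay stays under roughly 300000 ms, which suggests a five-minute jitter window whose point is to keep periodic flushes of the same region from all firing at once. A minimal sketch of that schedule-with-jitter pattern, using plain JDK scheduling in place of HBase's chore service:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.TimeUnit;

    // Sketch of the "flush after random delay" pattern seen above: the requested
    // flush is deferred by a random jitter so periodic flushes spread out instead
    // of stampeding. The 300000ms bound is inferred from the sampled delays.
    public final class JitteredFlush {
        private static final long MAX_JITTER_MS = 300_000L;

        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
            long delay = ThreadLocalRandom.current().nextLong(MAX_JITTER_MS);
            System.out.println("requesting flush after random delay " + delay + "ms");
            pool.schedule(() -> System.out.println("flushing hbase:meta now"), delay, TimeUnit.MILLISECONDS);
            pool.shutdown(); // already-scheduled delayed tasks still run by default
            pool.awaitTermination(MAX_JITTER_MS + 1_000, TimeUnit.MILLISECONDS);
        }
    }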
2018-07-02 07:40:07,858 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 147778ms
2018-07-02 07:40:07,950 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 80517ms
2018-07-02 07:40:08,050 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 277549ms
2018-07-02 07:40:08,152 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 225324ms
2018-07-02 07:40:08,251 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 199358ms
2018-07-02 07:40:08,350 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 224364ms
2018-07-02 07:40:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:08,450 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 37528ms
2018-07-02 07:40:08,550 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 137582ms
2018-07-02 07:40:08,650 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 293747ms
2018-07-02 07:40:08,752 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 125517ms
2018-07-02 07:40:08,851 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 217320ms
2018-07-02 07:40:08,951 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 211556ms
2018-07-02 07:40:09,051 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 195283ms
2018-07-02 07:40:09,151 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 75288ms
2018-07-02 07:40:09,259 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 266978ms
2018-07-02 07:40:09,350 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 194269ms
2018-07-02 07:40:09,459 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 77383ms
2018-07-02 07:40:09,551 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 199869ms
2018-07-02 07:40:09,655 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 102469ms
2018-07-02 07:40:09,759 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 152704ms
2018-07-02 07:40:09,859 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 99680ms
2018-07-02 07:40:09,955 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 274208ms
2018-07-02 07:40:10,055 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 17748ms
2018-07-02 07:40:10,151 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 40489ms
2018-07-02 07:40:10,253 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 37589ms
2018-07-02 07:40:10,355 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 109480ms
2018-07-02 07:40:10,455 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 202881ms
2018-07-02 07:40:10,550 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 103920ms
2018-07-02 07:40:10,658 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 61310ms
2018-07-02 07:40:10,759 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 163959ms
2018-07-02 07:40:10,850 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 56503ms
2018-07-02 07:40:10,950 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 153867ms
2018-07-02 07:40:10,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:11,050 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 96236ms
2018-07-02 07:40:11,150 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 121043ms
2018-07-02 07:40:11,259 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 250692ms
2018-07-02 07:40:11,350 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 151850ms
2018-07-02 07:40:11,450 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 8066ms
2018-07-02 07:40:11,553 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 166243ms
2018-07-02 07:40:11,657 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 110433ms
2018-07-02 07:40:11,750 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 235552ms
2018-07-02 07:40:11,854 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 21396ms
2018-07-02 07:40:11,950 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 76854ms
2018-07-02 07:40:12,050 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 45088ms
2018-07-02 07:40:12,150 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 150433ms
2018-07-02 07:40:12,251 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 22427ms
2018-07-02 07:40:12,357 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 10076ms
2018-07-02 07:40:12,451 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 3569ms
2018-07-02 07:40:12,550 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 75975ms
2018-07-02 07:40:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:12,650 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 62643ms
2018-07-02 07:40:12,760 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 236024ms
2018-07-02 07:40:12,850 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 247089ms
2018-07-02 07:40:12,956 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 101103ms
2018-07-02 07:40:13,056 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 294257ms
2018-07-02 07:40:13,159 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 248845ms
2018-07-02 07:40:13,259 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 187055ms
2018-07-02 07:40:13,351 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 229120ms
2018-07-02 07:40:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:13,454 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 31651ms
2018-07-02 07:40:13,550 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 68694ms
2018-07-02 07:40:13,658 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 47420ms
2018-07-02 07:40:13,757 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 144843ms
2018-07-02 07:40:13,855 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 237254ms
2018-07-02 07:40:13,951 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 228914ms
2018-07-02 07:40:14,050 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 298223ms
2018-07-02 07:40:14,150 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 82030ms
2018-07-02 07:40:14,251 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 38334ms
2018-07-02 07:40:14,354 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 48700ms
2018-07-02 07:40:14,452 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 174834ms
2018-07-02 07:40:14,551 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 254995ms
2018-07-02 07:40:14,658 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 41590ms
2018-07-02 07:40:14,756 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 225048ms
2018-07-02 07:40:14,854 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 170540ms
2018-07-02 07:40:14,950 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 236470ms
2018-07-02 07:40:15,052 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 287796ms
07:40:15,150 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 141802ms 2018-07-02 07:40:15,251 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 253868ms 2018-07-02 07:40:15,350 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 118915ms 2018-07-02 07:40:15,450 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 194456ms 2018-07-02 07:40:15,550 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 33856ms 2018-07-02 07:40:15,657 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 184718ms 2018-07-02 07:40:15,751 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 273373ms 2018-07-02 07:40:15,853 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 169041ms 2018-07-02 07:40:15,957 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 40752ms 2018-07-02 07:40:15,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:40:16,052 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 289837ms 2018-07-02 07:40:16,150 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 34172ms 2018-07-02 07:40:16,250 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of 
hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 49948ms 2018-07-02 07:40:16,351 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 72028ms 2018-07-02 07:40:16,459 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 146918ms 2018-07-02 07:40:16,550 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 222734ms 2018-07-02 07:40:16,655 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 266426ms 2018-07-02 07:40:16,751 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 292401ms 2018-07-02 07:40:16,851 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 222149ms 2018-07-02 07:40:16,954 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 27006ms 2018-07-02 07:40:17,050 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 33516ms 2018-07-02 07:40:17,150 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 52528ms 2018-07-02 07:40:17,251 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 286223ms 2018-07-02 07:40:17,360 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 45265ms 2018-07-02 07:40:17,456 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 126542ms 2018-07-02 07:40:17,557 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 245891ms 2018-07-02 07:40:17,600 
INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:40:17,650 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 168043ms 2018-07-02 07:40:17,751 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 179661ms 2018-07-02 07:40:17,851 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 8333ms 2018-07-02 07:40:17,951 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 244945ms 2018-07-02 07:40:18,055 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 202253ms 2018-07-02 07:40:18,158 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 80244ms 2018-07-02 07:40:18,250 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 212614ms 2018-07-02 07:40:18,356 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 284621ms 2018-07-02 07:40:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:40:18,450 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 294015ms 2018-07-02 07:40:18,558 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has 
an old edit so flush to free WALs after random delay 162734ms 2018-07-02 07:40:18,650 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 196317ms 2018-07-02 07:40:18,754 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 234473ms 2018-07-02 07:40:18,856 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 126702ms 2018-07-02 07:40:18,950 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 32006ms 2018-07-02 07:40:19,050 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 268822ms 2018-07-02 07:40:19,150 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 152200ms 2018-07-02 07:40:19,250 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 249839ms 2018-07-02 07:40:19,350 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 98923ms 2018-07-02 07:40:19,455 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 166004ms 2018-07-02 07:40:19,554 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 64240ms 2018-07-02 07:40:19,651 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 237421ms 2018-07-02 07:40:19,750 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 45622ms 2018-07-02 07:40:19,851 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 263500ms 2018-07-02 07:40:19,957 INFO [regionserver/asf911:0.Chore.3] 
regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 96830ms 2018-07-02 07:40:20,054 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 134353ms 2018-07-02 07:40:20,157 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 85099ms 2018-07-02 07:40:20,258 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 95621ms 2018-07-02 07:40:20,351 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 44692ms 2018-07-02 07:40:20,459 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 108780ms 2018-07-02 07:40:20,558 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 189240ms 2018-07-02 07:40:20,651 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 29550ms 2018-07-02 07:40:20,754 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 275406ms 2018-07-02 07:40:20,860 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 102915ms 2018-07-02 07:40:20,951 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 191322ms 2018-07-02 07:40:20,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:40:21,051 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit 
so flush to free WALs after random delay 105787ms 2018-07-02 07:40:21,153 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 138162ms 2018-07-02 07:40:21,250 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 69143ms 2018-07-02 07:40:21,350 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 252863ms 2018-07-02 07:40:21,452 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 250590ms 2018-07-02 07:40:21,550 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 26405ms 2018-07-02 07:40:21,650 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 29519ms 2018-07-02 07:40:21,756 INFO [regionserver/asf911:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 36871ms 2018-07-02 07:40:21,851 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 228510ms 2018-07-02 07:40:21,950 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 66746ms 2018-07-02 07:40:22,050 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 208069ms 2018-07-02 07:40:22,159 INFO [regionserver/asf911:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 88548ms 2018-07-02 07:40:22,251 INFO [regionserver/asf911:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1775): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 4503ms 2018-07-02 07:40:22,280 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(70): Since none of the CFs were above the size, flushing all. 
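The long run of MemstoreFlusherChore entries above is the regionservers' periodic flusher noticing that hbase:meta's info family still holds an edit older than the periodic-flush interval, and requesting a flush so the old WALs pinning that edit can be freed; each request carries a randomized delay (all delays above stay under 300000ms, suggesting a five-minute jitter cap) so that regions do not flush, and roll WALs, in lockstep. A minimal sketch of that pattern in Java, assuming a hypothetical FlushRequester callback and invented constants rather than HBase's actual HRegionServer internals:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of a periodic memstore-flush chore: if a region holds an edit older
 * than the configured flush interval, request a flush after a random delay so
 * that many regions do not flush (and roll WALs) at the same instant.
 * FlushRequester, Region, and the constants are stand-ins, not HBase's API.
 */
public class PeriodicFlushChoreSketch implements Runnable {
  interface FlushRequester { void requestDelayedFlush(Region r, long delayMs); }
  interface Region {
    String name();
    long oldestEditTimeMs();   // timestamp of the oldest unflushed edit
  }

  static final long FLUSH_INTERVAL_MS = 3_600_000L; // assumed 1h flush period
  static final long MAX_RANDOM_DELAY_MS = 300_000L; // assumed 5min jitter cap

  private final Iterable<Region> regions;
  private final FlushRequester requester;

  PeriodicFlushChoreSketch(Iterable<Region> regions, FlushRequester requester) {
    this.regions = regions;
    this.requester = requester;
  }

  @Override public void run() {
    long now = System.currentTimeMillis();
    for (Region r : regions) {
      if (now - r.oldestEditTimeMs() > FLUSH_INTERVAL_MS) {
        // Jitter the flush so WALs are freed without a synchronized stampede.
        long delay = ThreadLocalRandom.current().nextLong(MAX_RANDOM_DELAY_MS);
        System.out.println("requesting flush of " + r.name()
            + " because it has an old edit, after random delay " + delay + "ms");
        requester.requestDelayedFlush(r, delay);
      }
    }
  }

  public static void main(String[] args) {
    ScheduledExecutorService chorePool = Executors.newScheduledThreadPool(1);
    Region meta = new Region() {
      public String name() { return "hbase:meta,,1.1588230740"; }
      public long oldestEditTimeMs() { return 0L; } // very old edit -> flush
    };
    Runnable chore = new PeriodicFlushChoreSketch(
        java.util.List.of(meta), (r, d) -> { /* enqueue flush after d ms */ });
    // ~100ms cadence, matching the spacing of the entries above; runs until killed.
    chorePool.scheduleAtFixedRate(chore, 0, 100, TimeUnit.MILLISECONDS);
  }
}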
2018-07-02 07:40:22,280 INFO [MemStoreFlusher.0] regionserver.HRegion(2584): Flushing 3/3 column families, dataSize=3.45 KB heapSize=5.73 KB
2018-07-02 07:40:22,308 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741853_1029{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW]]} size 7417
2018-07-02 07:40:22,308 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741853_1029 size 7417
2018-07-02 07:40:22,308 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741853_1029 size 7417
2018-07-02 07:40:22,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:22,709 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=3.23 KB at sequenceid=36 (bloomFilter=false), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/19737756d9cb47e7ae3e12d566ba29f0
2018-07-02 07:40:22,734 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0
2018-07-02 07:40:22,734 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0
2018-07-02 07:40:22,734 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|FINALIZED], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0
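The BLOCK* addStoredBlock entries in between are the HDFS namenode recording, per datanode, the state of each replica of the newly written flush file: replicas start as RBW (replica being written) in the write pipeline and are reported FINALIZED as each datanode closes its copy. A toy model of that bookkeeping, with invented types in place of the real BlockManager:

import java.util.HashMap;
import java.util.Map;

/** Toy model of the replica bookkeeping behind "BLOCK* addStoredBlock":
 *  each datanode's block report adds or updates a replica state for the
 *  block. ReplicaState names mirror the log (RBW = replica being written);
 *  everything else here is illustrative, not the HDFS BlockManager API. */
public class BlockMapSketch {
  enum ReplicaState { RBW, FINALIZED }

  private final Map<String, ReplicaState> replicasByDatanode = new HashMap<>();
  private final String blockId;

  BlockMapSketch(String blockId) { this.blockId = blockId; }

  /** Called when a datanode's block report includes this block. */
  void addStoredBlock(String datanode, ReplicaState state) {
    replicasByDatanode.put(datanode, state);
    System.out.println("BLOCK* addStoredBlock: blockMap updated: " + datanode
        + " is added to " + blockId + " " + replicasByDatanode);
  }

  /** The block is considered safe once enough replicas are FINALIZED. */
  boolean hasMinFinalizedReplicas(int min) {
    return replicasByDatanode.values().stream()
        .filter(s -> s == ReplicaState.FINALIZED).count() >= min;
  }

  public static void main(String[] args) {
    BlockMapSketch blk = new BlockMapSketch("blk_1073741854_1030");
    blk.addStoredBlock("127.0.0.1:38320", ReplicaState.FINALIZED);
    blk.addStoredBlock("127.0.0.1:49540", ReplicaState.RBW);
    blk.addStoredBlock("127.0.0.1:51748", ReplicaState.FINALIZED);
    System.out.println("min replication met: " + blk.hasMinFinalizedReplicas(2));
  }
}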
2018-07-02 07:40:22,735 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=222 B at sequenceid=36 (bloomFilter=false), to=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/rep_barrier/3dd5ebb036214f4994baf000cffe5047
2018-07-02 07:40:22,741 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/19737756d9cb47e7ae3e12d566ba29f0 as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/19737756d9cb47e7ae3e12d566ba29f0
2018-07-02 07:40:22,748 INFO [MemStoreFlusher.0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/19737756d9cb47e7ae3e12d566ba29f0, entries=23, sequenceid=36, filesize=7.2 K
2018-07-02 07:40:22,749 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/rep_barrier/3dd5ebb036214f4994baf000cffe5047 as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier/3dd5ebb036214f4994baf000cffe5047
2018-07-02 07:40:22,754 INFO [MemStoreFlusher.0] regionserver.HStore(1070): Added hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/rep_barrier/3dd5ebb036214f4994baf000cffe5047, entries=2, sequenceid=36, filesize=4.9 K
2018-07-02 07:40:22,756 INFO [MemStoreFlusher.0] regionserver.HRegion(2793): Finished flush of dataSize ~3.45 KB/3533, heapSize ~6.20 KB/6352, currentSize=0 B/0 for 1588230740 in 476ms, sequenceid=36, compaction requested=true
2018-07-02 07:40:22,761 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(350): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2018-07-02 07:40:22,761 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(350): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2018-07-02 07:40:22,762 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(350): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2018-07-02 07:40:22,762 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] compactions.SortedCompactionPolicy(68): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2018-07-02 07:40:22,762 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] compactions.SortedCompactionPolicy(68): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking
2018-07-02 07:40:22,808 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] compactions.ExploringCompactionPolicy(121): Exploring compaction algorithm has selected 0 files of size 0 starting at candidate #0 after considering 0 permutations with 0 in ratio
2018-07-02 07:40:22,808 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] compactions.SortedCompactionPolicy(240): Not compacting files because we only have 0 files ready for compaction. Need 3 to initiate.
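The "Need 3 to initiate" refusal above is the minor-compaction gate: a store is only compacted once at least hbase.hstore.compaction.min files are eligible, and the exploring policy additionally rejects selections where one file dwarfs the rest. A condensed sketch of such a gate, under assumed defaults (min=3, ratio=1.2); the real ExploringCompactionPolicy scans permutations of the age-sorted file list rather than checking a single candidate:

import java.util.List;

/** Condensed sketch of a ratio-based minor-compaction gate in the spirit of
 *  ExploringCompactionPolicy. Assumed defaults: minFilesToCompact=3,
 *  ratio=1.2. The real policy explores permutations of the sorted file list
 *  and picks the best qualifying one; this only checks a single candidate. */
public class CompactionGateSketch {
  static final int MIN_FILES_TO_COMPACT = 3;
  static final double RATIO = 1.2;

  /** A candidate qualifies if it has enough files and no single file
   *  dwarfs the rest (each size <= RATIO * sum of the other sizes). */
  static boolean qualifies(List<Long> fileSizes) {
    if (fileSizes.size() < MIN_FILES_TO_COMPACT) {
      System.out.println("Not compacting files because we only have "
          + fileSizes.size() + " files ready. Need " + MIN_FILES_TO_COMPACT);
      return false;
    }
    long total = fileSizes.stream().mapToLong(Long::longValue).sum();
    for (long size : fileSizes) {
      if (size > RATIO * (total - size)) return false; // one file dominates
    }
    return true;
  }

  public static void main(String[] args) {
    // Invented byte sizes summing to the 20215 selected in the log above.
    System.out.println(qualifies(List.of(7_065L, 5_734L, 7_416L))); // true
    System.out.println(qualifies(List.of(7_065L)));                 // false
  }
}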
2018-07-02 07:40:22,809 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] compactions.ExploringCompactionPolicy(121): Exploring compaction algorithm has selected 3 files of size 20215 starting at candidate #20215 after considering 1 permutations with 1 in ratio
2018-07-02 07:40:22,811 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] regionserver.CompactSplit(375): Not compacting hbase:meta,,1.1588230740 because compaction request was cancelled
2018-07-02 07:40:22,811 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.HStore(1805): 1588230740 - info: Initiating minor compaction (all files)
2018-07-02 07:40:22,811 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] compactions.SortedCompactionPolicy(68): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking
2018-07-02 07:40:22,812 INFO [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.HRegion(2127): Starting compaction of info in hbase:meta,,1.1588230740
2018-07-02 07:40:22,812 INFO [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.HStore(1398): Starting compaction of [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/19737756d9cb47e7ae3e12d566ba29f0] into tmpdir=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp, totalSize=19.7 K
2018-07-02 07:40:22,812 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] compactions.ExploringCompactionPolicy(121): Exploring compaction algorithm has selected 0 files of size 0 starting at candidate #0 after considering 0 permutations with 0 in ratio
2018-07-02 07:40:22,813 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] compactions.SortedCompactionPolicy(240): Not compacting files because we only have 0 files ready for compaction. Need 3 to initiate.
2018-07-02 07:40:22,813 DEBUG [RS:4;asf911:46345-longCompactions-1530516902532] regionserver.CompactSplit(375): Not compacting hbase:meta,,1.1588230740 because compaction request was cancelled
2018-07-02 07:40:22,814 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] compactions.Compactor(202): Compacting d9abe0ce89514b5299447b7098ab8048, keycount=20, bloomtype=NONE, size=6.9 K, encoding=NONE, seqNum=14, earliestPutTs=1530516869379
2018-07-02 07:40:22,815 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] compactions.Compactor(202): Compacting b8da5a0d66424038a0c38772e2f357c5, keycount=8, bloomtype=NONE, size=5.6 K, encoding=NONE, seqNum=22, earliestPutTs=1530516901330
2018-07-02 07:40:22,815 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] compactions.Compactor(202): Compacting 19737756d9cb47e7ae3e12d566ba29f0, keycount=23, bloomtype=NONE, size=7.2 K, encoding=NONE, seqNum=36, earliestPutTs=1530516904649
2018-07-02 07:40:22,838 INFO [RS:4;asf911:46345-shortCompactions-1530517222761] throttle.PressureAwareThroughputController(153): 1588230740#info#compaction#12 average throughput is 1.32 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 10.00 MB/second
2018-07-02 07:40:22,850 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED]]} size 0
2018-07-02 07:40:22,850 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED]]} size 0
2018-07-02 07:40:22,850 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|FINALIZED], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|FINALIZED], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|FINALIZED]]} size 0
2018-07-02 07:40:22,857 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/.tmp/info/077e84e9a5fe45ff8d2cf75615c41400 as hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/077e84e9a5fe45ff8d2cf75615c41400
2018-07-02 07:40:22,872 INFO [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.HStore(1569): Completed compaction of 3 (all) file(s) in info of 1588230740 into 077e84e9a5fe45ff8d2cf75615c41400(size=8.7 K), total size for store is 8.7 K. This selection was in queue for 0sec, and took 0sec to execute.
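The PressureAwareThroughputController entry reports that this small compaction averaged 1.32 MB/second and never had to sleep against the 10.00 MB/second limit. The idea behind such a controller is plain rate throttling: account for each chunk written and sleep whenever the cumulative rate would exceed the allowance. A minimal sketch with made-up names:

/** Minimal sketch of pressure-aware write throttling for compactions:
 *  after each chunk, sleep just long enough that the cumulative rate stays
 *  under the limit. Names and the fixed 10 MB/s limit are illustrative. */
public class ThroughputThrottleSketch {
  private final double maxBytesPerSec;
  private long bytesWritten;
  private final long startNanos = System.nanoTime();
  private int sleeps;

  ThroughputThrottleSketch(double maxBytesPerSec) { this.maxBytesPerSec = maxBytesPerSec; }

  /** Account for a written chunk and sleep if we are running too fast. */
  void control(long chunkBytes) throws InterruptedException {
    bytesWritten += chunkBytes;
    double elapsedSec = (System.nanoTime() - startNanos) / 1e9;
    double minSecForBytes = bytesWritten / maxBytesPerSec;
    long sleepMs = (long) ((minSecForBytes - elapsedSec) * 1000);
    if (sleepMs > 0) {          // ahead of budget: slow down
      sleeps++;
      Thread.sleep(sleepMs);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    ThroughputThrottleSketch t = new ThroughputThrottleSketch(10 * 1024 * 1024);
    for (int i = 0; i < 20; i++) t.control(64 * 1024); // 20 x 64 KB chunks
    System.out.println("slept " + t.sleeps + " time(s)");
  }
}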
2018-07-02 07:40:22,872 INFO [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.CompactSplit$CompactionRunner(594): Completed compaction region=hbase:meta,,1.1588230740, storeName=info, priority=13, startTime=1530517222761; duration=0sec
2018-07-02 07:40:22,874 DEBUG [RS:4;asf911:46345-shortCompactions-1530517222761] regionserver.CompactSplit$CompactionRunner(622): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2018-07-02 07:40:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:25,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:40:25,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:30,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
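The recurring "Checking to see if procedure is done pid=23" entries are the master answering a client that polls, roughly every twenty seconds here, for completion of procedure 23. A generic sketch of that client-side poll loop; isDone stands in for an RPC such as the isProcedureDone check that MasterRpcServices is logging:

import java.util.concurrent.TimeUnit;
import java.util.function.LongPredicate;

/** Generic sketch of polling a master-side procedure until it finishes.
 *  isDone stands in for an RPC like the one behind "Checking to see if
 *  procedure is done pid=..."; the interval and timeout are assumptions. */
public class ProcedurePollSketch {
  static void waitForProcedure(long pid, LongPredicate isDone,
      long pollMs, long timeoutMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!isDone.test(pid)) {             // each call logs a "Checking" line
      if (System.currentTimeMillis() > deadline) {
        throw new java.util.concurrent.TimeoutException("pid=" + pid);
      }
      TimeUnit.MILLISECONDS.sleep(pollMs);  // back off between checks
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Toy predicate: the "procedure" completes after about 3 seconds.
    waitForProcedure(23, pid -> System.currentTimeMillis() - start > 3_000,
        500, 60_000);
    System.out.println("procedure pid=23 done");
  }
}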
2018-07-02 07:40:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:40,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
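Each replication source above prints the same statistics on a five-second cadence: how many edits it has shipped and which WAL it is reading at which byte offset, with position -1 meaning it has not yet consumed any of the current file. A sketch of such a fixed-rate statistics task, with an invented Source type in place of a real replication source:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of a fixed-rate replication statistics task: every interval, each
 *  source prints how many edits it has shipped and where in its WAL it is.
 *  Source is a stand-in for a real replication source; position -1 means no
 *  bytes of the current WAL have been consumed yet. Runs until killed. */
public class ReplicationStatsSketch {
  record Source(String walGroup, String currentWal, long position, long shippedEdits) {}

  public static void main(String[] args) {
    List<Source> sources = List.of(
        new Source("host%2C57468%2C1530516898088", "host%2C57468.1530516899420", 934, 0),
        new Source("host%2C46345%2C1530516902414", "host%2C46345.1530516903614", -1, 0));
    ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
    pool.scheduleAtFixedRate(() -> {
      for (Source s : sources) {
        System.out.println("Total replicated edits: " + s.shippedEdits()
            + ", walGroup [" + s.walGroup() + "]: currently replicating from: "
            + s.currentWal() + " at position: " + s.position());
      }
    }, 5, 5, TimeUnit.SECONDS); // 5s cadence, matching the log above
  }
}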
2018-07-02 07:40:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:45,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:40:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:50,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:40:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:40:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:40:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:41:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:41:02,543 DEBUG [RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6] regionserver.HStore(2620): Moving the files [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/19737756d9cb47e7ae3e12d566ba29f0] to archive
2018-07-02 07:41:02,550 DEBUG [RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6] backup.HFileArchiver(256): Archiving compacted files.
2018-07-02 07:41:02,557 DEBUG [RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6] backup.HFileArchiver(444): Archived from FileableStoreFile, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048 to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/archive/data/hbase/meta/1588230740/info/d9abe0ce89514b5299447b7098ab8048
2018-07-02 07:41:02,559 DEBUG [RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6] backup.HFileArchiver(444): Archived from FileableStoreFile, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5 to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/archive/data/hbase/meta/1588230740/info/b8da5a0d66424038a0c38772e2f357c5
2018-07-02 07:41:02,561 DEBUG [RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6] backup.HFileArchiver(444): Archived from FileableStoreFile, hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/info/19737756d9cb47e7ae3e12d566ba29f0 to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/archive/data/hbase/meta/1588230740/info/19737756d9cb47e7ae3e12d566ba29f0
2018-07-02 07:41:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:41:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:41:05,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
2018-07-02 07:41:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:41:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
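Discharging compacted files, as in the HFileArchiver entries above, is a move rather than a delete: each replaced HFile is renamed into a mirrored .../archive/... hierarchy so that snapshots and other readers can still resolve it. A sketch of that move using the Hadoop FileSystem API; the paths and helper are illustrative, not the real backup.HFileArchiver (which also handles name collisions and directory cleanup):

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch of archiving compacted store files: instead of deleting them, move
 *  each file under a mirrored archive hierarchy so other readers (snapshots,
 *  backups) can still find it. Paths and names here are illustrative. */
public class ArchiveCompactedFilesSketch {
  static void archive(FileSystem fs, Path rootDir, Path archiveRoot,
      List<Path> compactedFiles) throws IOException {
    for (Path file : compactedFiles) {
      // Rebuild the store-relative path under the archive root.
      String relative = file.toUri().getPath()
          .substring(rootDir.toUri().getPath().length());
      Path target = new Path(archiveRoot, relative.replaceFirst("^/", ""));
      fs.mkdirs(target.getParent());
      if (!fs.rename(file, target)) {
        throw new IOException("Failed to archive " + file + " to " + target);
      }
      System.out.println("Archived " + file + " to " + target);
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration()); // local FS by default
    Path root = new Path("/tmp/hbase-data");
    Path src = new Path("/tmp/hbase-data/data/hbase/meta/1588230740/info/f1");
    fs.create(src).close(); // make a dummy store file so the rename succeeds
    archive(fs, root, new Path("/tmp/hbase-archive"), List.of(src));
  }
}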
2018-07-02 07:41:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:41:10,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:41:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:41:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:41:15,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:41:17,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:41:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:41:20,970 INFO [asf911:40536Replication Statistics
#0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:22,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:25,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:41:25,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:30,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:40,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:45,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:41:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:50,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total 
replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:41:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:41:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:41:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:05,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:42:05,970 INFO 
[asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:10,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:15,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:17,600 INFO [asf911:46345Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:20,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:22,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:25,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:42:25,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:30,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:40,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:45,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:42:45,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:50,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total 
replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:42:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:42:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:42:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: 
walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:05,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:43:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:10,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:13,389 INFO [asf911:57468Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:15,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:17,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:18,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:20,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:22,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:23,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:25,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] 
master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:43:25,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:30,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:35,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: 
hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:40,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:45,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:43:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total 
replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:50,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:43:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:43:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:43:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: 
walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:05,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:44:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:07,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:08,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:10,970 INFO [asf911:40536Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:12,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:13,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:14,091 DEBUG [Thread-158-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(427): data stats (chunk size=2097152): current pool size=9, created chunk count=9, reused chunk count=3, reuseRatio=25.00% 2018-07-02 07:44:14,091 DEBUG [Thread-158-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(427): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2018-07-02 07:44:15,417 WARN [snapshot-hfile-cleaner-cache-refresher] snapshot.SnapshotFileCache$RefreshCacheTask(315): Failed to refresh snapshot hfile cache! 
java.net.ConnectException: Call From asf911.gq1.ygridcore.net/67.195.81.155 to localhost:38505 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1413)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy27.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
	at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
	at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:211)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.access$000(SnapshotFileCache.java:79)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask.run(SnapshotFileCache.java:313)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
	... 25 more
[... ReplicationStatisticsTask INFO entries continue on the same 5-second cadence (positions 346, -1, 934) from 07:44:15,970 through 07:44:32,600, interleaved with the two entries below ...]
2018-07-02 07:44:23,923 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2018-07-02 07:44:25,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23
walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:27,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:28,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:30,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:32,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:32,680 DEBUG [master/asf911:0.Chore.1] balancer.StochasticLoadBalancer(297): RegionReplicaHostCostFunction not needed 2018-07-02 07:44:32,680 DEBUG [master/asf911:0.Chore.1] balancer.StochasticLoadBalancer(297): RegionReplicaRackCostFunction not needed 2018-07-02 07:44:32,681 ERROR [master/asf911:0.Chore.1] hbase.ScheduledChore(189): Caught error java.lang.NullPointerException at org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:297) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748) 2018-07-02 07:44:32,681 ERROR [master/asf911:0.Chore.1] hbase.ScheduledChore(189): Caught error java.lang.NullPointerException at org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:297) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-07-02 07:44:32,687 INFO [RS-EventLoopGroup-15-18] ipc.ServerRpcConnection(556): Connection from 67.195.81.155:58432, version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-07-02 07:44:33,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:35,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:37,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:38,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:40,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating 
from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:42,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:43,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:45,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=23 2018-07-02 07:44:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:50,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total 
replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:44:55,971 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:44:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:44:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:45:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:45:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:45:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: 
2018-07-02 07:45:05,444 ERROR [Time-limited test] replication.TestSyncReplicationStandbyKillRS(93): Failed to transit standby cluster to DOWNGRADE_ACTIVE
org.apache.hadoop.hbase.exceptions.TimeoutIOException: java.util.concurrent.TimeoutException: The procedure 23 is still running
	at org.apache.hadoop.hbase.client.HBaseAdmin.get(HBaseAdmin.java:2156)
	at org.apache.hadoop.hbase.client.HBaseAdmin.transitReplicationPeerSyncReplicationState(HBaseAdmin.java:4019)
	at org.apache.hadoop.hbase.replication.TestSyncReplicationStandbyKillRS.testStandbyKillRegionServer(TestSyncReplicationStandbyKillRS.java:90)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: The procedure 23 is still running
	at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:3504)
	at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:3425)
	at org.apache.hadoop.hbase.client.HBaseAdmin.get(HBaseAdmin.java:2152)
	... 24 more
2018-07-02 07:45:05,460 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1
2018-07-02 07:46:45,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:46:47,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1
2018-07-02 07:46:48,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934
2018-07-02 07:46:48,728 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1
2018-07-02 07:46:49,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561):
Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:50,734 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:50,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:46:51,736 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:52,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:46:52,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:53,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:46:53,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:54,745 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:55,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:55,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:46:56,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:57,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently 
replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:46:57,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:58,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:46:58,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:46:59,757 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:47:00,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:47:00,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346 2018-07-02 07:47:01,761 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:47:02,600 INFO [asf911:46345Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C46345%2C1530516902414]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 at position: -1 2018-07-02 07:47:02,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:47:03,389 INFO [asf911:57468Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C57468%2C1530516898088]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 at position: 934 2018-07-02 07:47:03,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1 2018-07-02 07:47:04,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014] master.HMaster(3561): 
====> TEST TIMED OUT. PRINTING THREAD DUMP. <====

Timestamp: 2018-07-02 07:47:05,608

"RS-EventLoopGroup-6-22" daemon prio=10 tid=678 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-4" daemon prio=5 tid=13391 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=54338" daemon prio=5 tid=1152 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"AsyncFSWAL-0" daemon prio=5 tid=3180 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS:5;asf911:40536.replicationSource,1-EventThread" daemon prio=5 tid=3145 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

"PEWorker-12" daemon prio=5 tid=1269 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)

"RS-EventLoopGroup-12-1" daemon prio=10 tid=1182 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=57468" daemon prio=5 tid=2708 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@fba2335" daemon prio=5 tid=1053 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-8" daemon prio=10 tid=658 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:51708 [Waiting for operation #2315]" daemon prio=5 tid=2420 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readShort(DataInputStream.java:312)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-14-10" daemon prio=10 tid=2856 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"HBase-Metrics2-1" daemon prio=5 tid=404 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=57468" daemon prio=5 tid=2699 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"ReadOnlyZKClient-localhost:59178@0x13c02235" daemon prio=5 tid=3141 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"refreshUsed-/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data6/current/BP-864545819-67.195.81.155-1530516862749" daemon prio=5 tid=1109 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-13-27" daemon prio=10 tid=1410 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 4 on 45159" daemon prio=5 tid=1063 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-15-15" daemon prio=10 tid=3037 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-16-7" daemon prio=10 tid=3174 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"MemStoreFlusher.1" daemon prio=5 tid=2953 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:70)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336)
    at java.lang.Thread.run(Thread.java:748)

"ReadOnlyZKClient-localhost:59178@0x6b02adc8" daemon prio=5 tid=3112 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741847_1023, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=2820 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 3 on 45159" daemon prio=5 tid=1062 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"ReadOnlyZKClient-localhost:59178@0x4f12723e" daemon prio=5 tid=1450 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-3-3" daemon prio=10 tid=593 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS:5;asf911:40536.replicationSource,1-SendThread(localhost:59178)" daemon prio=5 tid=3144 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"RS-EventLoopGroup-5-2" daemon prio=10 tid=1515 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 7 on 45159" daemon prio=5 tid=1066 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RegionServerTracker-0" daemon prio=5 tid=1310 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"Default-IPC-NioEventLoopGroup-8-3" daemon prio=10 tid=1456 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"PEWorker-8" daemon prio=5 tid=1265 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)

"Monitor thread for TaskMonitor" daemon prio=5 tid=506 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 1 on 42386" daemon prio=5 tid=777 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"asf911:40536Replication Statistics #0" daemon prio=5 tid=3139 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40536" daemon prio=5 tid=3110 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RS-EventLoopGroup-13-20" daemon prio=10 tid=1393 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"222311693@qtp-2011498121-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:51439" daemon prio=5 tid=895 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

"IPC Client (717555232) connection to localhost/127.0.0.1:42386 from jenkins.hfs.7" daemon prio=5 tid=2943 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)

"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741847_1023, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=2819 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=57468" daemon prio=5 tid=2697 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"master/asf911:0" daemon prio=5 tid=1276 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:1618)
    at org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1638)
    at org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$400(AssignmentManager.java:111)
    at org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1580)

"RS-EventLoopGroup-13-2" daemon prio=10 tid=1296 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:52587 [Waiting for operation #2408]" daemon prio=5 tid=2465 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readShort(DataInputStream.java:312)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:748)

"JvmPauseMonitor" daemon prio=5 tid=3122 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:154)
    at java.lang.Thread.run(Thread.java:748)

"RS:4;asf911:46345.replicationSource.shipperasf911.gq1.ygridcore.net%2C46345%2C1530516902414,1" daemon prio=5 tid=3019 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.take(ReplicationSourceWALReader.java:300)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.run(ReplicationSourceShipper.java:101)

"M:0;asf911:44014" daemon prio=5 tid=1142 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56)
    at org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:679)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:884)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:828)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:931)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:586)
    at java.lang.Thread.run(Thread.java:748)

"Default-IPC-NioEventLoopGroup-8-4" daemon prio=10 tid=1484 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"ResponseProcessor for block BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005" daemon prio=5 tid=1283 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847)

"ReadOnlyZKClient-localhost:59178@0x247f9686" daemon prio=5 tid=1253 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-14-16" daemon prio=10 tid=10791 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Time-limited test" daemon prio=5 tid=23 blocked
  java.lang.Thread.State: BLOCKED
    at org.junit.runner.notification.SynchronizedRunListener.testFailure(SynchronizedRunListener.java:63)
    at org.junit.runner.notification.RunNotifier$4.notifyListener(RunNotifier.java:142)
    at org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    at org.junit.runner.notification.RunNotifier.fireTestFailures(RunNotifier.java:138)
    at org.junit.runner.notification.RunNotifier.fireTestFailure(RunNotifier.java:132)
    at org.apache.maven.surefire.common.junit4.Notifier.fireTestFailure(Notifier.java:114)
    at org.junit.internal.runners.model.EachTestNotifier.addFailure(EachTestNotifier.java:23)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:329)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 9 on 34583" daemon prio=5 tid=885 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3fb13a0b" daemon prio=5 tid=774 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-3-5" daemon prio=10 tid=1457 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Timer-4" daemon prio=5 tid=762 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"Thread-1561-SendThread(localhost:59178)" daemon prio=5 tid=2918 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"RS-EventLoopGroup-6-25" daemon prio=10 tid=698 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-3" daemon prio=10 tid=591 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-14-8" daemon prio=10 tid=2821 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"CacheReplicationMonitor(1278582857)" daemon prio=5 tid=790 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
    at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181)

"RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=57468" daemon prio=5 tid=2696 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@606e9ba6" daemon prio=5 tid=795 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
    at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:141)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    at java.lang.Thread.run(Thread.java:748)

"IPC Parameter Sending Thread #0" daemon prio=5 tid=169 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"NIOServerCxn.Factory:0.0.0.0/0.0.0.0:59178" daemon prio=5 tid=24 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173)
    at java.lang.Thread.run(Thread.java:748)

"RS_CLOSE_REGION-regionserver/asf911:0-0" daemon prio=5 tid=10792 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"ProcedureDispatcherTimeoutThread" daemon prio=5 tid=1275 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.DelayQueue.take(DelayQueue.java:211)
    at org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:78)
    at org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher$TimeoutExecutorThread.run(RemoteProcedureDispatcher.java:294)

"Thread-1561-EventThread" daemon prio=5 tid=2919 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

"PEWorker-1" daemon prio=5 tid=1258 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)

"Default-IPC-NioEventLoopGroup-8-1" daemon prio=10 tid=753 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=54338" daemon prio=5 tid=1157 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"Thread-158-SendThread(localhost:59178)" daemon prio=5 tid=596 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"java.util.concurrent.ThreadPoolExecutor$Worker@55085c2[State = -1, empty queue]" daemon prio=5 tid=1114 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "IPC Server listener on 34583" daemon prio=5 tid=871 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-5" daemon prio=5 tid=9991 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS_OPEN_META-regionserver/asf911:0-0" daemon prio=5 tid=3026 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "region-location-1" daemon prio=5 tid=748 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "process reaper" daemon prio=10 tid=343 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "asf911:57468Replication Statistics #0" daemon prio=5 tid=2738 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "PEWorker-7" daemon prio=5 tid=1264 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "nioEventLoopGroup-8-1" prio=10 tid=806 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) at java.lang.Thread.run(Thread.java:748) "Time-limited test-SendThread(localhost:59178)" daemon prio=5 tid=1126 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) "regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=1331 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at 
org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-3" daemon prio=5 tid=9865 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-13-16" daemon prio=10 tid=1378 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=54338" daemon prio=5 tid=1160 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "JvmPauseMonitor" daemon prio=5 tid=2722 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:154) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-11-1" daemon prio=10 tid=1163 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) 
"ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163-EventThread" daemon prio=5 tid=3328 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) "IPC Parameter Sending Thread #2" daemon prio=5 tid=609 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2e960924" daemon prio=5 tid=763 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3638) at java.lang.Thread.run(Thread.java:748) "Thread-409-HFileCleaner.small.0-1530516865875" daemon prio=5 tid=1309 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550) at org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:232) at org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:216) "regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=1330 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75) "RS-EventLoopGroup-6-12" daemon prio=10 tid=665 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "regionserver/asf911:0.Chore.2" daemon prio=5 tid=10739 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-13-4" daemon prio=10 tid=1298 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 7 on 42386" daemon prio=5 tid=783 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=46345" daemon prio=5 tid=2927 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-16-8" daemon prio=10 tid=3177 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-32" daemon prio=10 tid=715 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "AsyncFSWAL-0" daemon prio=5 tid=2827 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44014" daemon prio=5 tid=1139 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "IPC Server handler 6 on 33404" daemon prio=5 tid=974 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RS-EventLoopGroup-13-11" daemon prio=10 tid=1369 runnable java.lang.Thread.State: RUNNABLE at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=57468" daemon prio=5 tid=2705 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "regionserver/asf911:0.Chore.3" daemon prio=5 tid=3091 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=627 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75) "JvmPauseMonitor" daemon prio=5 tid=2945 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:154) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-9-4" daemon prio=10 tid=1301 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:56286 [Waiting for operation #2384]" daemon prio=5 tid=1646 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.DataInputStream.readShort(DataInputStream.java:312) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-26" daemon prio=10 tid=699 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:51051 [Waiting for operation #2349]" daemon prio=5 tid=1716 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.DataInputStream.readShort(DataInputStream.java:312) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229) at java.lang.Thread.run(Thread.java:748) "ProcessThread(sid:0 cport:59178):" daemon prio=5 tid=27 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122) "regionserver/asf911:0.logRoller" daemon prio=5 tid=3126 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:167) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-15-3" daemon prio=10 tid=3005 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 1 on 33404" daemon prio=5 tid=969 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "asf911:46345Replication Statistics #0" daemon prio=5 tid=2960 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:53235 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741851_1027]" daemon prio=5 tid=3031 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-13-3" daemon prio=10 tid=1297 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "PEWorker-6" daemon prio=5 tid=1263 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "RS-EventLoopGroup-13-21" daemon prio=10 tid=1395 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 tid=1057 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=1307 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-5" daemon prio=5 tid=16895 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-13-1" daemon prio=10 tid=1201 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "org.apache.hadoop.util.JvmPauseMonitor$Monitor@3e974605" daemon prio=5 tid=962 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-14" daemon prio=10 tid=689 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-16" daemon prio=10 tid=685 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "Block report processor" daemon prio=5 tid=764 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:3854) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3843) "RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=46345" daemon prio=5 tid=2923 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "DataStreamer for file /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/MasterProcWALs/pv2-00000000000000000001.log block BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005" daemon prio=5 tid=1274 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523) "RS-EventLoopGroup-6-13" daemon prio=10 tid=666 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "Reference Handler" daemon prio=10 tid=2 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at java.lang.ref.Reference.tryHandlePending(Reference.java:191) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) "RS-EventLoopGroup-9-2" daemon prio=10 tid=1299 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_290127418_23 at /127.0.0.1:50657 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005]" daemon prio=5 tid=1279 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) at java.lang.Thread.run(Thread.java:748) "161283455@qtp-539544235-0" daemon prio=5 tid=984 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

"RS-EventLoopGroup-14-2" daemon prio=10 tid=2716 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-9-7" daemon prio=10 tid=2941 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Default-IPC-NioEventLoopGroup-8-2" daemon prio=10 tid=1454 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 4 on 33404" daemon prio=5 tid=972 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"SessionTracker" daemon prio=5 tid=25 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)

"RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=54338" daemon prio=5 tid=1153 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-4" daemon prio=5 tid=9990 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS:5;asf911:40536.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C40536%2C1530516905630,1" daemon prio=5 tid=3182 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.handleEmptyWALEntryBatch(ReplicationSourceWALReader.java:239)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:146)

"RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=54338" daemon prio=5 tid=1149 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"regionserver/asf911:0.Chore.1" daemon prio=5 tid=2947 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server Responder" daemon prio=5 tid=966 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982)
    at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965)

"OldWALsCleaner-0" daemon prio=5 tid=1304 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:148)
    at org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:126)
    at org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$108/197982536.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-2" daemon prio=10 tid=589 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS:5;asf911:40536-longCompactions-1530516905887" daemon prio=5 tid=3123 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 8 on 45159" daemon prio=5 tid=1067 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server listener on 42386" daemon prio=5 tid=766 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)

"RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=46345" daemon prio=5 tid=2929 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,38428,1530516865163-SendThread(localhost:59178)" daemon prio=5 tid=3327 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741851_1027, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3033 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-31" daemon prio=10 tid=714 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741851_1027, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3036 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=44014" daemon prio=5 tid=1136 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"IPC Server idle connection scanner for port 45159" daemon prio=5 tid=1056 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"RS:5;asf911:40536" daemon prio=5 tid=3111 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
    at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1011)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:183)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:129)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:360)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
    at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:164)
    at java.lang.Thread.run(Thread.java:748)

"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=1280 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-10" daemon prio=10 tid=662 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-9-3" daemon prio=10 tid=1300 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-15-11" daemon prio=10 tid=3028 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=57468" daemon prio=5 tid=2706 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"Time-limited test-EventThread" daemon prio=5 tid=1146 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

"Timer for 'DataNode' metrics system" daemon prio=5 tid=19401 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"ReadOnlyZKClient-localhost:59178@0x31331b88" daemon prio=5 tid=2979 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=44014" daemon prio=5 tid=1133 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RS-EventLoopGroup-3-4" daemon prio=10 tid=594 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"DecommissionMonitor-0" daemon prio=5 tid=775 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 6 on 34583" daemon prio=5 tid=882 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-6-21" daemon prio=10 tid=680 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46345" daemon prio=5 tid=2931 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"Time-limited test-SendThread(localhost:59178)" daemon prio=5 tid=1145 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"Thread-1561-SendThread(localhost:59178)" daemon prio=5 tid=2693 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"RS-EventLoopGroup-13-10" daemon prio=10 tid=1370 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-15-2" daemon prio=10 tid=2940 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46345" daemon prio=5 tid=2922 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=54338" daemon prio=5 tid=1154 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:51476 [Waiting for operation #2331]" daemon prio=5 tid=2199 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readShort(DataInputStream.java:312)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:748)

"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE" daemon prio=5 tid=1281 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    at java.lang.Thread.run(Thread.java:748)

"Idle-Rpc-Conn-Sweeper-pool2-t1" daemon prio=5 tid=548 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Client (717555232) connection to localhost/127.0.0.1:42386 from jenkins.hfs.8" daemon prio=5 tid=3118 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)

"Thread-409-SendThread(localhost:59178)" daemon prio=5 tid=1302 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"PEWorker-15" daemon prio=5 tid=1272 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)

"RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=57468" daemon prio=5 tid=2702 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"1005648588@qtp-549638555-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38794" daemon prio=5 tid=761 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data6)" daemon prio=5 tid=1102 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:622)

"RS-EventLoopGroup-4-3" daemon prio=10 tid=744 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-24" daemon prio=10 tid=696 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-4" daemon prio=10 tid=590 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"677087512@qtp-549638555-0" daemon prio=5 tid=760 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

"IPC Server handler 9 on 45159" daemon prio=5 tid=1068 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-12-2" daemon prio=10 tid=1400 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-13-7" daemon prio=10 tid=1362 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"regionserver/asf911:0.leaseChecker" daemon prio=5 tid=3125 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:95)
    at java.lang.Thread.run(Thread.java:748)

"regionserver/asf911:0.Chore.3" daemon prio=5 tid=10740 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:53220 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741850_1026]" daemon prio=5 tid=3009 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"RS:4;asf911:46345.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C46345%2C1530516902414,1" daemon prio=5 tid=3020 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.handleEmptyWALEntryBatch(ReplicationSourceWALReader.java:239)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:146)

"RpcClient-timer-pool1-t1" daemon prio=5 tid=547 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560)
    at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-15-13" daemon prio=10 tid=3034 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"regionserver/asf911:0.leaseChecker" daemon prio=5 tid=2725 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:95)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 1 on 45159" daemon prio=5 tid=1060 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server handler 0 on 42386" daemon prio=5 tid=776 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1490f847" daemon prio=5 tid=788 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4642)
    at java.lang.Thread.run(Thread.java:748)

"PEWorker-16" daemon prio=5 tid=1273 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)

"RS-EventLoopGroup-4-2" daemon prio=10 tid=737 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 6 on 45159" daemon prio=5 tid=1065 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server handler 9 on 42386" daemon prio=5 tid=785 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=57468" daemon prio=5 tid=2707 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=57468" daemon prio=5 tid=2698 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"Timer-6" daemon prio=5 tid=897 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:52415 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741850_1026]" daemon prio=5 tid=3010 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3ecd5ec6" daemon prio=5 tid=787 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4598)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 1 on 34583" daemon prio=5 tid=877 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"region-location-0" daemon prio=5 tid=747 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-4-1" daemon prio=10 tid=448 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:57716 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741850_1026]" daemon prio=5 tid=3008 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=2727 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75)

"pool-127-thread-1" prio=5 tid=893 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40536" daemon prio=5 tid=3099 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44014" daemon prio=5 tid=1140 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5d221a84" daemon prio=5 tid=765 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:381)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-6-23" daemon prio=10 tid=695 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 8 on 42386" daemon prio=5 tid=784 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:57733 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741851_1027]" daemon prio=5 tid=3032 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"RS_OPEN_REGION-regionserver/asf911:0-0" daemon prio=5 tid=10812 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-2" daemon prio=5 tid=6521 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"SplitLogWorker-asf911:46345" daemon prio=5 tid=2955 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination.taskLoop(ZkSplitLogWorkerCoordination.java:461)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:219)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 7 on 33404" daemon prio=5 tid=975 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=40536" daemon prio=5 tid=3105 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "ReplicationExecutor-0" daemon prio=5 tid=2734 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-30" daemon prio=10 tid=709 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "refreshUsed-/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data4/current/BP-864545819-67.195.81.155-1530516862749" daemon prio=5 tid=1093 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-15-9" daemon prio=10 tid=3025 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@41b79b5a" daemon prio=5 tid=892 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:141) at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 tid=769 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) "AsyncFSWAL-0" daemon prio=5 tid=3039 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=44014" daemon prio=5 tid=1138 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "IPC Server handler 3 on 42386" daemon prio=5 tid=779 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "M:1;asf911:54338" daemon prio=5 tid=1161 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92) at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56) at org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:679) at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:884) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:828) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:931) at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:586) at java.lang.Thread.run(Thread.java:748) "org.apache.hadoop.hdfs.PeerCache@152357e9" daemon prio=5 tid=514 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:255) at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) at java.lang.Thread.run(Thread.java:748) "RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44014" daemon prio=5 tid=1128 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "ReadOnlyZKClient-localhost:59178@0x6bf6815f" daemon prio=5 tid=3208 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313) at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source) at java.lang.Thread.run(Thread.java:748) "RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40536" daemon prio=5 tid=3098 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-6-15" daemon prio=10 tid=686 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "regionserver/asf911:0.Chore.2" daemon prio=5 tid=7347 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS_OPEN_PRIORITY_REGION-regionserver/asf911:0-0" daemon prio=5 tid=3067 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS:3;asf911:57468" daemon prio=5 tid=2709 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92) at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1011) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:183) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:129) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:167) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:164) at java.lang.Thread.run(Thread.java:748) "PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741852_1028, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3173 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) at java.lang.Thread.run(Thread.java:748) "ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112-SendThread(localhost:59178)" daemon prio=5 tid=3212 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) "Finalizer" daemon prio=8 tid=3 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216) "RS:3;asf911:57468.replicationSource,1-EventThread" daemon prio=5 tid=2762 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) "RS:4;asf911:46345" daemon prio=5 tid=2934 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92) at org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1011) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:183) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:129) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:167) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:164) at java.lang.Thread.run(Thread.java:748) "ReadOnlyZKClient-localhost:59178@0x21fbb142" daemon prio=5 tid=2712 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313) at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source) at java.lang.Thread.run(Thread.java:748) "MemStoreFlusher.0" daemon prio=5 tid=2728 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) at java.util.concurrent.DelayQueue.poll(DelayQueue.java:70) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-1418152690_2666 at /127.0.0.1:52171 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741847_1023]" daemon prio=5 tid=2815 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 2 on 34583" daemon prio=5 tid=878 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "IPC Server handler 2 on 42386" daemon prio=5 tid=778 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "DataXceiver for client DFSClient_NONMAPREDUCE_-1418152690_2666 at /127.0.0.1:51790 [Waiting for operation #2449]" daemon prio=5 tid=1647 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.DataInputStream.readShort(DataInputStream.java:312) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-18" daemon prio=10 tid=687 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741847_1023, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=2823 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 0 on 34583" daemon prio=5 tid=876 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "Timer-5" daemon prio=5 tid=805 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505) "RS-EventLoopGroup-15-7" daemon prio=10 tid=3015 runnable java.lang.Thread.State: RUNNABLE at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=46345" daemon prio=5 tid=2925 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-14-14" daemon prio=10 tid=2863 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data4)" daemon prio=5 tid=1086 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:622) "RS-EventLoopGroup-9-1" daemon prio=10 tid=1125 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "PacketResponder: 
BP-864545819-67.195.81.155-1530516862749:blk_1073741852_1028, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3176 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=57468" daemon prio=5 tid=2701 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-6-6" daemon prio=10 tid=656 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "Thread-1561-SendThread(localhost:59178)" daemon prio=5 tid=3095 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) "RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=40536" daemon prio=5 tid=3103 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-6-28" daemon prio=10 tid=707 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS:4;asf911:46345-longCompactions-1530516902532" daemon prio=5 tid=2946 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-5-1" daemon prio=10 tid=467 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=40536" daemon prio=5 tid=3107 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "SplitLogWorker-asf911:57468" daemon prio=5 tid=2732 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at 
org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination.taskLoop(ZkSplitLogWorkerCoordination.java:461) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:219) at java.lang.Thread.run(Thread.java:748) "RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=54338" daemon prio=5 tid=1151 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=44014" daemon prio=5 tid=1135 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40536" daemon prio=5 tid=3097 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS:3;asf911:57468-longCompactions-1530516898318" daemon prio=5 tid=2723 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:51875 [Waiting for operation #2444]" daemon prio=5 tid=1738 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.DataInputStream.readShort(DataInputStream.java:312) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229) at java.lang.Thread.run(Thread.java:748) "java.util.concurrent.ThreadPoolExecutor$Worker@28f78b6c[State = -1, empty queue]" daemon prio=5 tid=1084 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-15-17" daemon prio=10 tid=10770 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "main" prio=5 tid=1 runnable java.lang.Thread.State: RUNNABLE at java.lang.Thread.dumpThreads(Native Method) at java.lang.Thread.getAllStackTraces(Thread.java:1610) at org.apache.hadoop.hbase.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:88) at org.apache.hadoop.hbase.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:74) at org.apache.hadoop.hbase.TimedOutTestsListener.testFailure(TimedOutTestsListener.java:62) at 
org.junit.runner.notification.SynchronizedRunListener.testFailure(SynchronizedRunListener.java:63) at org.junit.runner.notification.RunNotifier$4.notifyListener(RunNotifier.java:142) at org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) at org.junit.runner.notification.RunNotifier.fireTestFailures(RunNotifier.java:138) at org.junit.runner.notification.RunNotifier.fireTestFailure(RunNotifier.java:132) at org.apache.maven.surefire.common.junit4.Notifier.fireTestFailure(Notifier.java:114) at org.junit.internal.runners.model.EachTestNotifier.addFailure(EachTestNotifier.java:23) at org.junit.internal.runners.model.EachTestNotifier.addMultipleFailureException(EachTestNotifier.java:29) at org.junit.internal.runners.model.EachTestNotifier.addFailure(EachTestNotifier.java:21) at org.junit.runners.ParentRunner.run(ParentRunner.java:369) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6" daemon prio=5 tid=13515 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-14-5" daemon prio=10 tid=2816 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data1)" daemon prio=5 tid=1069 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:622) "regionserver/asf911:0.Chore.3" daemon prio=5 tid=7348 in 
Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46345" daemon prio=5 tid=2932 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@30980915" daemon prio=5 tid=789 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4725) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-3-2" daemon prio=10 tid=592 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "LeaseRenewer:jenkins.hfs.7@localhost:42386" daemon prio=5 tid=3004 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444) at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) at java.lang.Thread.run(Thread.java:748) "MemStoreFlusher.1" daemon prio=5 tid=2730 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:70)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-14-1" daemon prio=10 tid=2692 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-15-16" daemon prio=10 tid=3057 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:52644 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741852_1028]" daemon prio=5 tid=3170 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
"MemStoreFlusher.0" daemon prio=5 tid=3128 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:70)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-15-6" daemon prio=10 tid=3016 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-6" daemon prio=5 tid=20410 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"Thread-1561-EventThread" daemon prio=5 tid=2694 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"RS-EventLoopGroup-13-25" daemon prio=10 tid=1403 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"IPC Parameter Sending Thread #7" daemon prio=5 tid=23608 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"IPC Client (717555232) connection to localhost/127.0.0.1:42386 from jenkins" daemon prio=5 tid=886 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)
"RS-EventLoopGroup-14-3" daemon prio=10 tid=2813 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"region-location-0" daemon prio=5 tid=1448 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=44014" daemon prio=5 tid=1132 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-13-14" daemon prio=10 tid=1375 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"region-location-1" daemon prio=5 tid=1449 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=46345" daemon prio=5 tid=2924 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-14-4" daemon prio=10 tid=2814 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 5 on 45159" daemon prio=5 tid=1064 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"IPC Server handler 2 on 33404" daemon prio=5 tid=970 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS-EventLoopGroup-15-5" daemon prio=10 tid=3007 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"MemStoreFlusher.0" daemon prio=5 tid=2951 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:70)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336)
    at java.lang.Thread.run(Thread.java:748)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-9" daemon prio=5 tid=17019 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 0 on 33404" daemon prio=5 tid=968 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS-EventLoopGroup-16-1" daemon prio=10 tid=3094 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-16-2" daemon prio=10 tid=3116 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-5" daemon prio=10 tid=1357 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"Timer-7" daemon prio=5 tid=988 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)
"RS-EventLoopGroup-13-23" daemon prio=10 tid=1399 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 3 on 33404" daemon prio=5 tid=971 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS-EventLoopGroup-14-9" daemon prio=10 tid=2855 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=54338" daemon prio=5 tid=1147 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"ReplicationExecutor-0" daemon prio=5 tid=3134 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-15-10" daemon prio=10 tid=3027 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@4cdd0561" daemon prio=5 tid=786 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:431)
    at java.lang.Thread.run(Thread.java:748)
"threadDeathWatcher-7-1" daemon prio=1 tid=595 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"DataXceiver for client DFSClient_NONMAPREDUCE_290127418_23 at /127.0.0.1:51463 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005]" daemon prio=5 tid=1277 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=54338" daemon prio=5 tid=1150 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS:5;asf911:40536.replicationSource.shipperasf911.gq1.ygridcore.net%2C40536%2C1530516905630,1" daemon prio=5 tid=3181 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.take(ReplicationSourceWALReader.java:300)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.run(ReplicationSourceShipper.java:101)
"RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=54338" daemon prio=5 tid=1155 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"Thread-1561-EventThread" daemon prio=5 tid=3096 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"RS-EventLoopGroup-12-4" daemon prio=10 tid=1455 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"DataXceiver for client DFSClient_NONMAPREDUCE_-1418152690_2666 at /127.0.0.1:56634 [Waiting for operation #2385]" daemon prio=5 tid=2039 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readShort(DataInputStream.java:312)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:748)
"Thread-158-MemStoreChunkPool Statistics" daemon prio=5 tid=510 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"nioEventLoopGroup-10-1" prio=10 tid=898 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=54338" daemon prio=5 tid=1158 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-14-13" daemon prio=10 tid=2862 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=40536" daemon prio=5 tid=3106 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-13-17" daemon prio=10 tid=1385 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-14-7" daemon prio=10 tid=2824 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"regionserver/asf911:0.logRoller" daemon prio=5 tid=2949 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:167)
    at java.lang.Thread.run(Thread.java:748)
"RS:3;asf911:57468.replicationSource.shipperasf911.gq1.ygridcore.net%2C57468%2C1530516898088,1" daemon prio=5 tid=2828 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.take(ReplicationSourceWALReader.java:300)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.run(ReplicationSourceShipper.java:101)
"Thread-409-EventThread" daemon prio=5 tid=1303 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=625 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75)
"IPC Server handler 3 on 34583" daemon prio=5 tid=879 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=40536" daemon prio=5 tid=3100 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:52426 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741851_1027]" daemon prio=5 tid=3030 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-8" daemon prio=10 tid=1367 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"DataXceiver for client DFSClient_NONMAPREDUCE_-190551301_2666 at /127.0.0.1:56855 [Waiting for operation #2369]" daemon prio=5 tid=2268 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readShort(DataInputStream.java:312)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-32" daemon prio=10 tid=1506 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-15-4" daemon prio=10 tid=3006 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741851_1027, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3035 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-12" daemon prio=10 tid=1371 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"Socket Reader #1 for port 33404" daemon prio=5 tid=964 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745)
    at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
"pool-129-thread-1" prio=5 tid=983 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"Socket Reader #1 for port 45159" daemon prio=5 tid=1055 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745)
    at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
"pool-125-thread-1" prio=5 tid=796 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-15" daemon prio=10 tid=1374 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-26" daemon prio=10 tid=1404 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-29" daemon prio=10 tid=1413 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"Thread-158-EventThread" daemon prio=5 tid=597 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=54338" daemon prio=5 tid=1159 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"ReadOnlyZKClient-localhost:59178@0x7d5779ca" daemon prio=5 tid=2758 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)
"Time-limited test-EventThread" daemon prio=5 tid=1127 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"RS-EventLoopGroup-6-17" daemon prio=10 tid=684 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"java.util.concurrent.ThreadPoolExecutor$Worker@3b87d9a7[State = -1, empty queue]" daemon prio=5 tid=1098 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 7 on 34583" daemon prio=5 tid=883 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data2/current/BP-864545819-67.195.81.155-1530516862749" daemon prio=5 tid=1077 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    at java.lang.Thread.run(Thread.java:748)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@4f604058" daemon prio=5 tid=1072 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3071)
    at java.lang.Thread.run(Thread.java:748)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-2" daemon prio=5 tid=20631 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46345" daemon prio=5 tid=2921 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=40536" daemon prio=5 tid=3102 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-13-31" daemon prio=10 tid=1505 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data1/current/BP-864545819-67.195.81.155-1530516862749" daemon prio=5 tid=1078 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-15-12" daemon prio=10 tid=3029 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"LeaseRenewer:jenkins.hfs.6@localhost:42386" daemon prio=5 tid=2811 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-6" daemon prio=10 tid=1358 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"374612204@qtp-22654378-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46194" daemon prio=5 tid=798 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"1355661940@qtp-539544235-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43026" daemon prio=5 tid=985 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"RS-EventLoopGroup-16-4" daemon prio=10 tid=3167 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=54338" daemon prio=5 tid=1148 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@4e37bc22" daemon prio=5 tid=770 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182)
    at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-19" daemon prio=10 tid=1391 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 8 on 34583" daemon prio=5 tid=884 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"IPC Server handler 0 on 45159" daemon prio=5 tid=1059 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS-EventLoopGroup-16-9" daemon prio=10 tid=10811 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"RS:3;asf911:57468.replicationSource,1-SendThread(localhost:59178)" daemon prio=5 tid=2761 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
"regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=1327 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75)
"RS-EventLoopGroup-13-18" daemon prio=10 tid=1387 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
"IPC Server idle connection scanner for port 33404" daemon prio=5 tid=965 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at
java.util.TimerThread.run(Timer.java:505) "PEWorker-14" daemon prio=5 tid=1271 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "surefire-forkedjvm-command-thread" daemon prio=5 tid=18 runnable java.lang.Thread.State: RUNNABLE at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(FileInputStream.java:255) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115) at org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:391) at java.lang.Thread.run(Thread.java:748) "SplitLogWorker-asf911:40536" daemon prio=5 tid=3132 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination.taskLoop(ZkSplitLogWorkerCoordination.java:461) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:219) at java.lang.Thread.run(Thread.java:748) "Socket Reader #1 for port 34583" daemon prio=5 tid=872 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) "RS-EventLoopGroup-6-5" daemon prio=10 tid=655 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=44014" daemon prio=5 tid=1134 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "IPC Server idle connection scanner for port 34583" daemon prio=5 tid=873 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505) "RS-EventLoopGroup-6-27" daemon prio=10 tid=701 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-13-22" daemon prio=10 tid=1392 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=46345" daemon prio=5 tid=2930 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-1" daemon prio=5 tid=17118 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "PEWorker-11" daemon prio=5 tid=1268 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46345" daemon prio=5 tid=2933 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data5)" daemon prio=5 tid=1101 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:622) "PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741850_1026, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3012 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-16-3" daemon prio=10 tid=3166 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-14-15" daemon prio=10 tid=3065 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-14-6" daemon prio=10 tid=2822 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "PEWorker-10" daemon prio=5 tid=1267 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "RS-EventLoopGroup-9-6" daemon prio=10 tid=2717 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "Socket Reader #1 for port 42386" daemon prio=5 tid=767 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) "master/asf911:0.Chore.1" daemon prio=5 tid=1306 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-1-1" daemon prio=10 tid=405 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-15-14" daemon prio=10 tid=3038 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=2950 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75) "WALProcedureStoreSyncThread" daemon prio=5 tid=1256 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:777) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:70) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:272) "RS-EventLoopGroup-13-28" daemon prio=10 tid=1411 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-1418152690_2666 at /127.0.0.1:52980 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741847_1023]" daemon prio=5 tid=2817 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) at java.lang.Thread.run(Thread.java:748) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-0" daemon prio=5 tid=6519 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "refreshUsed-/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data3/current/BP-864545819-67.195.81.155-1530516862749" daemon prio=5 tid=1092 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 6 on 42386" daemon prio=5 tid=782 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=601 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505) "RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46345" daemon prio=5 tid=2920 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "refreshUsed-/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data5/current/BP-864545819-67.195.81.155-1530516862749" daemon prio=5 tid=1108 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44014" daemon prio=5 tid=1141 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data3)" daemon prio=5 tid=1085 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:622) "PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741850_1026, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3013 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-6-9" daemon prio=10 tid=659 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "ReplicationExecutor-0" daemon prio=5 tid=2958 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-9-8" daemon prio=10 tid=3117 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "PEWorker-2" daemon prio=5 tid=1259 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "RS-EventLoopGroup-6-20" daemon prio=10 tid=682 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-3" daemon prio=5 tid=9989 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-15-8" daemon prio=10 tid=3014 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-13-13" daemon prio=10 tid=1372 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "AsyncFSWAL-0" daemon prio=5 tid=3018 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "PEWorker-4" daemon prio=5 tid=1261 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "Thread-158-MemStoreChunkPool Statistics" daemon prio=5 tid=512 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:53453 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741852_1028]" daemon prio=5 tid=3169 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-12-3" daemon prio=10 tid=1419 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data2)" daemon prio=5 tid=1070 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:622) "pool-123-thread-1" prio=5 tid=759 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-8" daemon prio=5 tid=13517 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-1" daemon prio=5 tid=6520 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "PEWorker-5" daemon prio=5 tid=1262 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750) "master/asf911:0.splitLogManager..Chore.1" daemon prio=5 tid=1252 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=46345" daemon prio=5 tid=2926 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "DataXceiver for client 
DFSClient_NONMAPREDUCE_-1418152690_2666 at /127.0.0.1:57478 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741847_1023]" daemon prio=5 tid=2818 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-15-18" daemon prio=10 tid=19660 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@3f85d85f" daemon prio=5 tid=1103 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3071) at java.lang.Thread.run(Thread.java:748) "RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44014" daemon prio=5 tid=1129 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"IPC Server listener on 45159" daemon prio=5 tid=1054 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)
"regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=626 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75)
"RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=54338" daemon prio=5 tid=1156 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"IPC Server idle connection scanner for port 42386" daemon prio=5 tid=768 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-2" daemon prio=5 tid=9864 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 5 on 33404" daemon prio=5 tid=973 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS-EventLoopGroup-9-5" daemon prio=10 tid=1485 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-9" daemon prio=10 tid=1368 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"regionserver/asf911:0.Chore.2" daemon prio=5 tid=3090 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-1" daemon prio=5 tid=6398 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"ReadOnlyZKClient-localhost:59178@0x39c21844" daemon prio=5 tid=2936 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:748)
"IPC Client (717555232) connection to localhost/127.0.0.1:42386 from jenkins.hfs.6" daemon prio=5 tid=2721 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)
"RS-EventLoopGroup-6-11" daemon prio=10 tid=664 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"LeaseRenewer:jenkins.hfs.8@localhost:42386" daemon prio=5 tid=3165 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
        at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-13-30" daemon prio=10 tid=1418 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@7b0dc7e" daemon prio=5 tid=870 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=44014" daemon prio=5 tid=1137 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-13-24" daemon prio=10 tid=1402 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"2139250692@qtp-2011498121-0" daemon prio=5 tid=894 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"regionserver/asf911:0.leaseChecker" daemon prio=5 tid=2948 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:95)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 2 on 45159" daemon prio=5 tid=1061 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"ReadOnlyZKClient-localhost:59178@0x4bc7bd55" daemon prio=5 tid=3324 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$67/295545861.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:748)
"ReplicationExecutor-0.replicationSource,1-asf911.gq1.ygridcore.net,33727,1530516865112-EventThread" daemon prio=5 tid=3213 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"RS:4;asf911:46345.replicationSource,1-EventThread" daemon prio=5 tid=2983 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
"RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40536" daemon prio=5 tid=3108 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=57468" daemon prio=5 tid=2703 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-11-2" daemon prio=10 tid=1602 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-3-1" daemon prio=10 tid=429 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"MemStoreFlusher.1" daemon prio=5 tid=3130 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
        at java.util.concurrent.DelayQueue.poll(DelayQueue.java:70)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336)
        at java.lang.Thread.run(Thread.java:748)
"ProcExecTimeout" daemon prio=5 tid=1257 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:223)
        at org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:78)
        at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:56)
"nioEventLoopGroup-12-1" prio=10 tid=989 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=46345" daemon prio=5 tid=2928 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS:3;asf911:57468.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C57468%2C1530516898088,1" daemon prio=5 tid=2829 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.handleEmptyWALEntryBatch(ReplicationSourceWALReader.java:239)
        at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:146)
"RS:4;asf911:46345.replicationSource,1-SendThread(localhost:59178)" daemon prio=5 tid=2982 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
"regionserver/asf911:0.procedureResultReporter" daemon prio=5 tid=3127 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75)
"RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40536" daemon prio=5 tid=3109 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@5e71ca18" daemon prio=5 tid=1087 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3071)
        at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-6-29" daemon prio=10 tid=708 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE" daemon prio=5 tid=1282 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
        at java.lang.Thread.run(Thread.java:748)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-7" daemon prio=5 tid=13516 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=57468" daemon prio=5 tid=2704 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-15-1" daemon prio=10 tid=2917 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"Thread-410" daemon prio=5 tid=1220 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hbase.master.ActiveMasterManager.blockUntilBecomingActiveMaster(ActiveMasterManager.java:227)
        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2127)
        at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:572)
        at org.apache.hadoop.hbase.master.HMaster$$Lambda$34/665901971.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:748)
"OldWALsCleaner-1" daemon prio=5 tid=1305 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:148)
        at org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:126)
        at org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$108/197982536.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=40536" daemon prio=5 tid=3101 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44014" daemon prio=5 tid=1130 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"PEWorker-13" daemon prio=5 tid=1270 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
        at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)
"RS-EventLoopGroup-10-1" daemon prio=10 tid=1144 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:42386" daemon prio=5 tid=875 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659)
        at java.lang.Thread.run(Thread.java:748)
"PEWorker-3" daemon prio=5 tid=1260 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
        at org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1750)
"RS-EventLoopGroup-6-7" daemon prio=10 tid=657 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=57468" daemon prio=5 tid=2700 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"Thread-409-HFileCleaner.large.0-1530516865875" daemon prio=5 tid=1308 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106)
        at org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:232)
        at org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:201)
"RS-EventLoopGroup-14-12" daemon prio=10 tid=2864 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"DataXceiver for client DFSClient_NONMAPREDUCE_-1781927028_2666 at /127.0.0.1:57948 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741852_1028]" daemon prio=5 tid=3171 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 9 on 33404" daemon prio=5 tid=977 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"IPC Server handler 5 on 34583" daemon prio=5 tid=881 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS-EventLoopGroup-16-5" daemon prio=10 tid=3168 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"regionserver/asf911:0.logRoller" daemon prio=5 tid=2726 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:167)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014" daemon prio=5 tid=1131 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"IPC Server Responder" daemon prio=5 tid=874 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965)
"RS-EventLoopGroup-6-1" daemon prio=10 tid=486 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:42386" daemon prio=5 tid=1058 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659)
        at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-6-19" daemon prio=10 tid=683 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"RS_OPEN_REGION-regionserver/asf911:0-0" daemon prio=5 tid=3066 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"447484279@qtp-22654378-0" daemon prio=5 tid=797 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@763e39ce" daemon prio=5 tid=982 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:141)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:748)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-7" daemon prio=5 tid=23916 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"DataXceiver for client DFSClient_NONMAPREDUCE_290127418_23 at /127.0.0.1:55961 [Receiving block BP-864545819-67.195.81.155-1530516862749:blk_1073741829_1005]" daemon prio=5 tid=1278 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 4 on 34583" daemon prio=5 tid=880 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-0" daemon prio=5 tid=13617 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=40536" daemon prio=5 tid=3104 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741850_1026, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3011 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
        at java.lang.Thread.run(Thread.java:748)
"RS:4;asf911:46345-shortCompactions-1530517222761" daemon prio=5 tid=12348 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"RS-EventLoopGroup-16-6" daemon prio=10 tid=3175 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"SyncThread:0" daemon prio=5 tid=26 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
"RS_COMPACTED_FILES_DISCHARGER-regionserver/asf911:0-0" daemon prio=5 tid=6397 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 5 on 42386" daemon prio=5 tid=781 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"IPC Server handler 8 on 33404" daemon prio=5 tid=976 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=29 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3063)
        at java.lang.Thread.run(Thread.java:748)
"LeaseRenewer:jenkins@localhost:42386" daemon prio=5 tid=1116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server handler 4 on 42386" daemon prio=5 tid=780 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
"PacketResponder: BP-864545819-67.195.81.155-1530516862749:blk_1073741852_1028, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=3172 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
        at java.lang.Thread.run(Thread.java:748)
"RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=57468" daemon prio=5 tid=2695 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
        at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
"RS-EventLoopGroup-14-11" daemon prio=10 tid=2857 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
        at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
"regionserver/asf911:0.Chore.1" daemon prio=5 tid=2724 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"surefire-forkedjvm-ping-30s" daemon prio=5 tid=19 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"IPC Server listener on 33404" daemon prio=5 tid=963 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)
"regionserver/asf911:0.Chore.1" daemon prio=5 tid=3124 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/HBase-Flaky-Tests/hbase-server/target/test-data/9c4c5079-2309-3a9a-21fe-15d49a9ff3d1/cluster_c3725aa3-cf5b-ba90-6b0b-6d702105c688/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:42386" daemon prio=5 tid=967 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659)
        at java.lang.Thread.run(Thread.java:748)
2018-07-02 07:47:05,755 INFO [Time-limited test] hbase.ResourceChecker(172): after: replication.TestSyncReplicationStandbyKillRS#testStandbyKillRegionServer Thread=548 (was 859), OpenFileDescriptor=2473 (was 3156), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=92 (was 603), ProcessCount=265 (was 271), AvailableMemoryMB=14656 (was 12980) - AvailableMemoryMB LEAK? -
2018-07-02 07:47:05,755 WARN [Time-limited test] hbase.ResourceChecker(135): Thread=548 is superior to 500
2018-07-02 07:47:05,755 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=2473 is superior to 1024
2018-07-02 07:47:05,756 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1
2018-07-02 07:47:05,758 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3561): Client=jenkins//67.195.81.155 list replication peers, regex=1
2018-07-02 07:47:05,759 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.HMaster(3574): Client=jenkins//67.195.81.155 transit current cluster state to DOWNGRADE_ACTIVE in a synchronous replication peer id=1
2018-07-02 07:47:05,788 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:05,788 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:05,788 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:05,788 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,46345,1530516902414' *****
2018-07-02 07:47:05,790 INFO [Thread-4] regionserver.HRegionServer(2168): STOPPED: Shutdown hook
2018-07-02 07:47:05,791 INFO [RS:4;asf911:46345] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread
2018-07-02 07:47:05,791 INFO [RS:4;asf911:46345] regionserver.HeapMemoryManager(221): Stopping
2018-07-02 07:47:05,791 INFO [SplitLogWorker-asf911:46345] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting.
2018-07-02 07:47:05,791 INFO [SplitLogWorker-asf911:46345] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,46345,1530516902414 exiting
2018-07-02 07:47:05,792 INFO [RS:4;asf911:46345] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-07-02 07:47:05,792 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting
2018-07-02 07:47:05,792 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting
2018-07-02 07:47:05,792 INFO [RS:4;asf911:46345] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-07-02 07:47:05,792 INFO [RS:4;asf911:46345] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:47:05,793 DEBUG [RS:4;asf911:46345] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-07-02 07:47:05,793 INFO [RS:4;asf911:46345] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x39c21844 to localhost:59178
2018-07-02 07:47:05,793 DEBUG [RS:4;asf911:46345] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:05,794 INFO [RS:4;asf911:46345] regionserver.CompactSplit(394): Waiting for Split Thread to finish...
2018-07-02 07:47:05,794 INFO [RS:4;asf911:46345] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish...
2018-07-02 07:47:05,794 INFO [RS:4;asf911:46345] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish...
2018-07-02 07:47:05,795 INFO [RS:4;asf911:46345] regionserver.HRegionServer(1399): Waiting on 1 regions to close
2018-07-02 07:47:05,795 DEBUG [RS:4;asf911:46345] regionserver.HRegionServer(1403): Online Regions={1588230740=hbase:meta,,1.1588230740}
2018-07-02 07:47:05,795 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 1588230740, disabling compactions & flushes
2018-07-02 07:47:05,795 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region hbase:meta,,1.1588230740
2018-07-02 07:47:05,803 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases
2018-07-02 07:47:05,814 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/meta/1588230740/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=25
2018-07-02 07:47:05,816 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2018-07-02 07:47:05,817 INFO [RS_CLOSE_META-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed hbase:meta,,1.1588230740
2018-07-02 07:47:05,818 DEBUG [RS_CLOSE_META-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed hbase:meta,,1.1588230740
2018-07-02 07:47:05,850 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped
2018-07-02 07:47:05,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] procedure2.ProcedureExecutor(887): Stored pid=40, state=RUNNABLE:PRE_PEER_SYNC_REPLICATION_STATE_TRANSITION; org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure
2018-07-02 07:47:05,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=40
2018-07-02 07:47:05,970 INFO [asf911:40536Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(250): Normal source for cluster 1: Total replicated edits: 0, current progress: walGroup [asf911.gq1.ygridcore.net%2C40536%2C1530516905630]: currently replicating from: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 at position: 346
2018-07-02 07:47:05,995 INFO [RS:4;asf911:46345] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,46345,1530516902414; all regions closed.
2018-07-02 07:47:05,999 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 0
2018-07-02 07:47:06,000 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 0
2018-07-02 07:47:06,000 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW]]} size 0
2018-07-02 07:47:06,004 DEBUG [RS:4;asf911:46345] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:47:06,004 INFO [RS:4;asf911:46345] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C46345%2C1530516902414.meta:.meta(num 1530516903844)
2018-07-02 07:47:06,006 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW]]} size 0
2018-07-02 07:47:06,007 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW]]} size 0
2018-07-02 07:47:06,007 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW]]} size 0
2018-07-02 07:47:06,010 DEBUG [RS:4;asf911:46345] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:47:06,010 INFO [RS:4;asf911:46345] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C46345%2C1530516902414:(num 1530516903614)
2018-07-02 07:47:06,010 DEBUG [RS:4;asf911:46345] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,010 INFO [RS:4;asf911:46345] regionserver.Leases(149): Closed leases
2018-07-02 07:47:06,011 INFO [RS:4;asf911:46345] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,46345,1530516902414 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown
2018-07-02 07:47:06,013 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting.
2018-07-02 07:47:06,014 INFO [RS:4;asf911:46345] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing
2018-07-02 07:47:06,015 INFO [RS:4;asf911:46345.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C46345%2C1530516902414,1] regionserver.WALEntryStream(321): Log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614 was moved to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C46345%2C1530516902414.1530516903614
2018-07-02 07:47:06,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=40
2018-07-02 07:47:06,132 INFO [RS:4;asf911:46345] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x31331b88 to localhost:59178
2018-07-02 07:47:06,133 DEBUG [RS:4;asf911:46345] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,134 INFO [RS:4;asf911:46345] regionserver.ReplicationSource(527): ReplicationSourceWorker RS:4;asf911:46345.replicationSource.shipperasf911.gq1.ygridcore.net%2C46345%2C1530516902414,1 terminated
2018-07-02 07:47:06,135 INFO [RS:4;asf911:46345] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:46345
2018-07-02 07:47:06,149 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:47:06,149 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:47:06,149 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,149 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:46345-0x16459e9b4500039, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,149 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414
2018-07-02 07:47:06,157 INFO [RS:4;asf911:46345] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,46345,1530516902414; zookeeper connection closed.
2018-07-02 07:47:06,157 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,46345,1530516902414]
2018-07-02 07:47:06,158 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@20aa3352] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@20aa3352
2018-07-02 07:47:06,158 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:47:06,158 INFO [RegionServerTracker-0] master.ServerManager(604): Processing expiration of asf911.gq1.ygridcore.net,46345,1530516902414 on asf911.gq1.ygridcore.net,44014,1530516864901
2018-07-02 07:47:06,158 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:06,158 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:47:06,158 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:06,159 INFO [Thread-1561-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414 znode expired, triggering replicatorRemoved event
2018-07-02 07:47:06,158 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:06,159 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:06,159 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,159 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:06,159 INFO [Thread-1561-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,46345,1530516902414 znode expired, triggering replicatorRemoved event
2018-07-02 07:47:06,159 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:06,159 INFO [Thread-4] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,57468,1530516898088' *****
2018-07-02 07:47:06,159 INFO [Thread-4] regionserver.HRegionServer(2168): STOPPED: Shutdown hook
2018-07-02 07:47:06,160 INFO [RS:3;asf911:57468] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread
2018-07-02 07:47:06,160 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:47:06,160 INFO [SplitLogWorker-asf911:57468] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting.
2018-07-02 07:47:06,160 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,160 INFO [SplitLogWorker-asf911:57468] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,57468,1530516898088 exiting
2018-07-02 07:47:06,160 INFO [RS:3;asf911:57468] regionserver.HeapMemoryManager(221): Stopping
2018-07-02 07:47:06,161 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:06,161 INFO [RS:3;asf911:57468] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-07-02 07:47:06,161 INFO [RS:3;asf911:57468] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-07-02 07:47:06,161 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting
2018-07-02 07:47:06,162 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting
2018-07-02 07:47:06,162 INFO [RS:3;asf911:57468] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:47:06,163 DEBUG [RS:3;asf911:57468] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-07-02 07:47:06,164 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1527): Closing d1a74048f8e137b8647beefb747aafba, disabling compactions & flushes
2018-07-02 07:47:06,164 INFO [RS:3;asf911:57468] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x21fbb142 to localhost:59178
2018-07-02 07:47:06,164 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1567): Updates disabled for region hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:47:06,164 DEBUG [RS:3;asf911:57468] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,165 INFO [RS:3;asf911:57468] regionserver.HRegionServer(1399): Waiting on 1 regions to close
2018-07-02 07:47:06,165 DEBUG [RS:3;asf911:57468] regionserver.HRegionServer(1403): Online Regions={d1a74048f8e137b8647beefb747aafba=hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.}
2018-07-02 07:47:06,172 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/hbase/namespace/d1a74048f8e137b8647beefb747aafba/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12
2018-07-02 07:47:06,175 INFO [RS_CLOSE_REGION-regionserver/asf911:0-1] regionserver.HRegion(1681): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:47:06,175 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-1] handler.CloseRegionHandler(124): Closed hbase:namespace,,1530516868937.d1a74048f8e137b8647beefb747aafba.
2018-07-02 07:47:06,204 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases
2018-07-02 07:47:06,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=40
2018-07-02 07:47:06,252 INFO [regionserver/asf911:0.Chore.2] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped
2018-07-02 07:47:06,329 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(887): Stored pid=41, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,46345,1530516902414, splitWal=true, meta=true
2018-07-02 07:47:06,329 DEBUG [RegionServerTracker-0] assignment.AssignmentManager(1321): Added=asf911.gq1.ygridcore.net,46345,1530516902414 to dead servers, submitted shutdown handler to be executed meta=true
2018-07-02 07:47:06,330 INFO [PEWorker-4] procedure.ServerCrashProcedure(118): Start pid=41, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,46345,1530516902414, splitWal=true, meta=true
2018-07-02 07:47:06,365 INFO [RS:3;asf911:57468] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,57468,1530516898088; all regions closed.
2018-07-02 07:47:06,368 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW]]} size 0
2018-07-02 07:47:06,369 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW]]} size 0
2018-07-02 07:47:06,369 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5924c3e7-0126-4318-ab71-97788504e4c7:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-c02e3dde-4ee5-4268-849e-c97455f318a6:NORMAL:127.0.0.1:38320|RBW], ReplicaUC[[DISK]DS-38565b32-54b2-419a-97c3-f65c173a0df3:NORMAL:127.0.0.1:51748|RBW]]} size 0
2018-07-02 07:47:06,372 DEBUG [RS:3;asf911:57468] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:47:06,372 INFO [RS:3;asf911:57468] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C57468%2C1530516898088:(num 1530516899420)
2018-07-02 07:47:06,372 DEBUG [RS:3;asf911:57468] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,372 INFO [RS:3;asf911:57468] regionserver.Leases(149): Closed leases
2018-07-02 07:47:06,373 INFO [RS:3;asf911:57468] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,57468,1530516898088 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2018-07-02 07:47:06,373 INFO [RS:3;asf911:57468] regionserver.CompactSplit(394): Waiting for Split Thread to finish...
2018-07-02 07:47:06,373 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting.
2018-07-02 07:47:06,373 INFO [RS:3;asf911:57468] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish...
2018-07-02 07:47:06,373 INFO [RS:3;asf911:57468] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish...
2018-07-02 07:47:06,374 INFO [RS:3;asf911:57468] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing
2018-07-02 07:47:06,395 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(229): Splitting meta WALs pid=41, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,46345,1530516902414, splitWal=true, meta=true
2018-07-02 07:47:06,429 INFO [RS:3;asf911:57468.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C57468%2C1530516898088,1] regionserver.WALEntryStream(321): Log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420 was moved to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C57468%2C1530516898088.1530516899420
2018-07-02 07:47:06,490 INFO [RS:3;asf911:57468] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x7d5779ca to localhost:59178
2018-07-02 07:47:06,491 DEBUG [RS:3;asf911:57468] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,492 INFO [RS:3;asf911:57468] regionserver.ReplicationSource(527): ReplicationSourceWorker RS:3;asf911:57468.replicationSource.shipperasf911.gq1.ygridcore.net%2C57468%2C1530516898088,1 terminated
2018-07-02 07:47:06,493 INFO [RS:3;asf911:57468] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:57468
2018-07-02 07:47:06,501 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:47:06,501 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088
2018-07-02 07:47:06,501 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,501 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:57468-0x16459e9b4500035, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,510 INFO [RS:3;asf911:57468] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,57468,1530516898088; zookeeper connection closed.
2018-07-02 07:47:06,510 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,57468,1530516898088]
2018-07-02 07:47:06,510 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:06,510 INFO [Thread-1561-EventThread] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(122): /cluster2/rs/asf911.gq1.ygridcore.net,57468,1530516898088 znode expired, triggering replicatorRemoved event
2018-07-02 07:47:06,510 INFO [RegionServerTracker-0] master.ServerManager(604): Processing expiration of asf911.gq1.ygridcore.net,57468,1530516898088 on asf911.gq1.ygridcore.net,44014,1530516864901
2018-07-02 07:47:06,510 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:06,510 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@62d7907a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@62d7907a
2018-07-02 07:47:06,511 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:06,511 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:06,511 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-07-02 07:47:06,511 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@2243f4cd
2018-07-02 07:47:06,511 INFO [Thread-4] regionserver.HRegionServer(2154): ***** STOPPING region server 'asf911.gq1.ygridcore.net,40536,1530516905630' *****
2018-07-02 07:47:06,511 INFO [Thread-4] regionserver.HRegionServer(2168): STOPPED: Shutdown hook
2018-07-02 07:47:06,511 DEBUG [Thread-1561-EventThread] zookeeper.ZKUtil(355): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Set watcher on existing znode=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:06,511 INFO [RS:5;asf911:40536] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread
2018-07-02 07:47:06,511 INFO [RS:5;asf911:40536] regionserver.HeapMemoryManager(221): Stopping
2018-07-02 07:47:06,511 INFO [SplitLogWorker-asf911:40536] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting.
2018-07-02 07:47:06,512 INFO [SplitLogWorker-asf911:40536] regionserver.SplitLogWorker(232): SplitLogWorker asf911.gq1.ygridcore.net,40536,1530516905630 exiting
2018-07-02 07:47:06,512 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting
2018-07-02 07:47:06,512 INFO [RS:5;asf911:40536] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-07-02 07:47:06,512 INFO [RS:5;asf911:40536] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-07-02 07:47:06,512 INFO [RS:5;asf911:40536] regionserver.HRegionServer(1069): stopping server asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:06,514 DEBUG [RS:5;asf911:40536] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-07-02 07:47:06,513 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting
2018-07-02 07:47:06,514 INFO [RS:5;asf911:40536] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x6b02adc8 to localhost:59178
2018-07-02 07:47:06,515 DEBUG [RS:5;asf911:40536] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,515 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1527): Closing 0f545ce4fc7475df98047cbbbf56ffee, disabling compactions & flushes
2018-07-02 07:47:06,516 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1567): Updates disabled for region SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:47:06,516 INFO [RS:5;asf911:40536] regionserver.HRegionServer(1399): Waiting on 1 regions to close
2018-07-02 07:47:06,516 DEBUG [RS:5;asf911:40536] regionserver.HRegionServer(1403): Online Regions={0f545ce4fc7475df98047cbbbf56ffee=SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.}
2018-07-02 07:47:06,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=40
2018-07-02 07:47:06,550 INFO [regionserver/asf911:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped
2018-07-02 07:47:06,550 INFO [regionserver/asf911:0.leaseChecker] regionserver.Leases(149): Closed leases
2018-07-02 07:47:06,578 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(887): Stored pid=42, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,57468,1530516898088, splitWal=true, meta=false
2018-07-02 07:47:06,578 DEBUG [RegionServerTracker-0] assignment.AssignmentManager(1321): Added=asf911.gq1.ygridcore.net,57468,1530516898088 to dead servers, submitted shutdown handler to be executed meta=false
2018-07-02 07:47:06,579 INFO [PEWorker-15] procedure.ServerCrashProcedure(118): Start pid=42, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=asf911.gq1.ygridcore.net,57468,1530516898088, splitWal=true, meta=false
2018-07-02 07:47:06,721 DEBUG [PEWorker-4] master.MasterWalManager(283): Renamed region directory: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414-splitting
2018-07-02 07:47:06,721 INFO [PEWorker-4] master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,46345,1530516902414]
2018-07-02 07:47:06,721 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] wal.WALSplitter(678): Wrote file=hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/data/default/SyncRep/0f545ce4fc7475df98047cbbbf56ffee/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7
2018-07-02 07:47:06,723 INFO [RS_CLOSE_REGION-regionserver/asf911:0-0] regionserver.HRegion(1681): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:47:06,724 DEBUG [RS_CLOSE_REGION-regionserver/asf911:0-0] handler.CloseRegionHandler(124): Closed SyncRep,,1530516874235.0f545ce4fc7475df98047cbbbf56ffee.
2018-07-02 07:47:06,724 INFO [PEWorker-4] master.SplitLogManager(177): hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414-splitting is empty dir, no logs to split
2018-07-02 07:47:06,724 INFO [PEWorker-4] master.SplitLogManager(241): Started splitting 0 logs in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414-splitting] for [asf911.gq1.ygridcore.net,46345,1530516902414]
2018-07-02 07:47:06,726 INFO [PEWorker-4] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,46345,1530516902414-splitting] in 2ms
2018-07-02 07:47:06,726 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(235): Done splitting meta WALs pid=41, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,46345,1530516902414, splitWal=true, meta=true
2018-07-02 07:47:06,821 DEBUG [PEWorker-15] procedure.ServerCrashProcedure(239): Splitting WALs pid=42, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,57468,1530516898088, splitWal=true, meta=false
2018-07-02 07:47:06,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=43, ppid=41, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}]
2018-07-02 07:47:06,823 DEBUG [PEWorker-15] master.MasterWalManager(283): Renamed region directory: hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088-splitting
2018-07-02 07:47:06,823 INFO [PEWorker-15] master.SplitLogManager(461): dead splitlog workers [asf911.gq1.ygridcore.net,57468,1530516898088]
2018-07-02 07:47:06,824 INFO [PEWorker-15] master.SplitLogManager(177): hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088-splitting is empty dir, no logs to split
2018-07-02 07:47:06,824 INFO [PEWorker-15] master.SplitLogManager(241): Started splitting 0 logs in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088-splitting] for [asf911.gq1.ygridcore.net,57468,1530516898088]
2018-07-02 07:47:06,826 INFO [PEWorker-15] master.SplitLogManager(293): finished splitting (more than or equal to) 0 bytes in 0 log files in [hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,57468,1530516898088-splitting] in 2ms
2018-07-02 07:47:06,826 DEBUG [PEWorker-15] procedure.ServerCrashProcedure(247): Done splitting WALs pid=42, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS; ServerCrashProcedure server=asf911.gq1.ygridcore.net,57468,1530516898088, splitWal=true, meta=false
2018-07-02 07:47:06,887 INFO [PEWorker-2] procedure.MasterProcedureScheduler(697): pid=43, ppid=41, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 checking lock on 1588230740
2018-07-02 07:47:06,887 INFO [PEWorker-15] procedure2.ProcedureExecutor(1516): Initialized subprocedures=[{pid=44, ppid=42, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba}]
2018-07-02 07:47:06,888 INFO [PEWorker-2] assignment.AssignProcedure(218): Starting pid=43, ppid=41, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf911.gq1.ygridcore.net,46345,1530516902414; forceNewPlan=false, retain=true
2018-07-02 07:47:06,917 INFO [RS:5;asf911:40536] regionserver.HRegionServer(1097): stopping server asf911.gq1.ygridcore.net,40536,1530516905630; all regions closed.
2018-07-02 07:47:06,924 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:38320 is added to blk_1073741852_1028{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW]]} size 0
2018-07-02 07:47:06,924 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49540 is added to blk_1073741852_1028{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW]]} size 0
2018-07-02 07:47:06,924 INFO [Block report processor] blockmanagement.BlockManager(2648): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51748 is added to blk_1073741852_1028{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e6a7f79b-7693-4bbd-acb5-1ab8a4017b82:NORMAL:127.0.0.1:51748|RBW], ReplicaUC[[DISK]DS-bc5fd0d4-25f8-4361-8dfb-5280895b9af8:NORMAL:127.0.0.1:49540|RBW], ReplicaUC[[DISK]DS-33af0b53-15a7-4ec4-abfa-2e00c9359dad:NORMAL:127.0.0.1:38320|RBW]]} size 0
2018-07-02 07:47:06,929 DEBUG [RS:5;asf911:40536] wal.AbstractFSWAL(860): Moved 1 WAL file(s) to /user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs
2018-07-02 07:47:06,929 INFO [RS:5;asf911:40536] wal.AbstractFSWAL(863): Closed WAL: AsyncFSWAL asf911.gq1.ygridcore.net%2C40536%2C1530516905630:(num 1530516906980)
2018-07-02 07:47:06,929 DEBUG [RS:5;asf911:40536] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:06,929 INFO [RS:5;asf911:40536] regionserver.Leases(149): Closed leases
2018-07-02 07:47:06,930 INFO [RS:5;asf911:40536] hbase.ChoreService(327): Chore service for: regionserver/asf911:0 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region asf911.gq1.ygridcore.net,40536,1530516905630 Period: 120000 Unit: MILLISECONDS]] on shutdown
2018-07-02 07:47:06,930 INFO [RS:5;asf911:40536] regionserver.CompactSplit(394): Waiting for Split Thread to finish...
2018-07-02 07:47:06,930 INFO [regionserver/asf911:0.logRoller] regionserver.LogRoller(222): LogRoller exiting.
2018-07-02 07:47:06,931 INFO [RS:5;asf911:40536] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish...
2018-07-02 07:47:06,932 INFO [RS:5;asf911:40536] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish...
2018-07-02 07:47:06,932 INFO [RS:5;asf911:40536] regionserver.ReplicationSource(481): Closing source 1 because: Region server is closing
2018-07-02 07:47:06,934 INFO [RS:5;asf911:40536.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C40536%2C1530516905630,1] regionserver.WALEntryStream(321): Log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/WALs/asf911.gq1.ygridcore.net,40536,1530516905630/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980 was moved to hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980
2018-07-02 07:47:06,936 DEBUG [RS:5;asf911:40536.replicationSource.wal-reader.asf911.gq1.ygridcore.net%2C40536%2C1530516905630,1] regionserver.WALEntryStream(250): Reached the end of log hdfs://localhost:42386/user/jenkins/test-data/46bc57b3-3fc9-c6d7-49be-7e67c210d950/oldWALs/asf911.gq1.ygridcore.net%2C40536%2C1530516905630.1530516906980
2018-07-02 07:47:06,984 INFO [PEWorker-15] procedure.MasterProcedureScheduler(697): pid=44, ppid=42, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=d1a74048f8e137b8647beefb747aafba checking lock on d1a74048f8e137b8647beefb747aafba
2018-07-02 07:47:06,986 DEBUG [RS-EventLoopGroup-13-8] ipc.FailedServers(56): Added failed server with address asf911.gq1.ygridcore.net/67.195.81.155:46345 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: asf911.gq1.ygridcore.net/67.195.81.155:46345
2018-07-02 07:47:07,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=44014] master.MasterRpcServices(1144): Checking to see if procedure is done pid=40
2018-07-02 07:47:07,038 INFO [PEWorker-16] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:07,051 INFO [RS:5;asf911:40536] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x13c02235 to localhost:59178
2018-07-02 07:47:07,052 DEBUG [RS:5;asf911:40536] ipc.AbstractRpcClient(483): Stopping rpc client
2018-07-02 07:47:07,053 INFO [RS:5;asf911:40536] regionserver.ReplicationSource(527): ReplicationSourceWorker RS:5;asf911:40536.replicationSource.shipperasf911.gq1.ygridcore.net%2C40536%2C1530516905630,1 terminated
2018-07-02 07:47:07,054 INFO [RS:5;asf911:40536] ipc.NettyRpcServer(144): Stopping server on /67.195.81.155:40536
2018-07-02 07:47:07,060 INFO [PEWorker-16] assignment.RegionTransitionProcedure(241): Dispatch pid=43, ppid=41, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:07,068 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/cluster2/rs/asf911.gq1.ygridcore.net,40536,1530516905630
2018-07-02 07:47:07,068 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:44014-0x16459e9b450000b, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:07,068 DEBUG [Thread-1561-EventThread] zookeeper.ZKWatcher(478): regionserver:40536-0x16459e9b450003d, quorum=localhost:59178, baseZNode=/cluster2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/cluster2/rs
2018-07-02 07:47:07,076 INFO [RS:5;asf911:40536] regionserver.HRegionServer(1153): Exiting; stopping=asf911.gq1.ygridcore.net,40536,1530516905630; zookeeper connection closed.
2018-07-02 07:47:07,076 INFO [RegionServerTracker-0] master.RegionServerTracker(159): RegionServer ephemeral node deleted, processing expiration [asf911.gq1.ygridcore.net,40536,1530516905630]
2018-07-02 07:47:07,077 INFO [RegionServerTracker-0] master.ServerManager(604): Processing expiration of asf911.gq1.ygridcore.net,40536,1530516905630 on asf911.gq1.ygridcore.net,44014,1530516864901
2018-07-02 07:47:07,077 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4cfe75a1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4cfe75a1
2018-07-02 07:47:07,077 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(121): Starting fs shutdown hook thread.
2018-07-02 07:47:07,080 INFO [Thread-4] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.