2018-12-04 20:48:48,306 DEBUG [main] hbase.HBaseTestingUtility(351): Setting hbase.rootdir to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc
2018-12-04 20:48:48,328 INFO [Time-limited test] hbase.HBaseTestingUtility(961): Starting up minicluster with 1 master(s) and 3 regionserver(s) and 3 datanode(s)
2018-12-04 20:48:48,329 INFO [Time-limited test] hbase.HBaseZKTestingUtility(85): Created new mini-cluster data directory: /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131, deleteOnExit=true
2018-12-04 20:48:48,329 INFO [Time-limited test] hbase.HBaseTestingUtility(976): STARTING DFS
2018-12-04 20:48:48,330 INFO [Time-limited test] hbase.HBaseTestingUtility(753): Setting test.cache.data to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cache_data in system properties and HBase conf
2018-12-04 20:48:48,331 INFO [Time-limited test] hbase.HBaseTestingUtility(753): Setting hadoop.tmp.dir to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/hadoop_tmp in system properties and HBase conf
2018-12-04 20:48:48,331 INFO [Time-limited test] hbase.HBaseTestingUtility(753): Setting hadoop.log.dir to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/hadoop_logs in system properties and HBase conf
2018-12-04 20:48:48,332 INFO [Time-limited test] hbase.HBaseTestingUtility(753): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/mapred_local in system properties and HBase conf
2018-12-04 20:48:48,332 INFO [Time-limited test] hbase.HBaseTestingUtility(753): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/mapred_temp in system properties and HBase conf
2018-12-04 20:48:48,333 INFO [Time-limited test] hbase.HBaseTestingUtility(744): read short circuit is OFF
2018-12-04 20:48:48,455 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-12-04 20:48:48,995 DEBUG [Time-limited test] fs.HFileSystem(317): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2018-12-04 20:48:50,958 WARN [Time-limited test] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2018-12-04 20:48:51,234 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-12-04 20:48:51,315 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-12-04 20:48:51,355 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/local-repository/org/apache/hadoop/hadoop-hdfs/2.7.7/hadoop-hdfs-2.7.7-tests.jar!/webapps/hdfs to /tmp/Jetty_localhost_54312_hdfs____.wx03wh/webapp
2018-12-04 20:48:51,646 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54312
2018-12-04 20:48:53,046 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-12-04 20:48:53,053 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/local-repository/org/apache/hadoop/hadoop-hdfs/2.7.7/hadoop-hdfs-2.7.7-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_34827_datanode____yq0z1f/webapp
2018-12-04 20:48:53,222 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34827
2018-12-04 20:48:53,635 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-12-04 20:48:53,645 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/local-repository/org/apache/hadoop/hadoop-hdfs/2.7.7/hadoop-hdfs-2.7.7-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_57500_datanode____u1pq4a/webapp
2018-12-04 20:48:53,844 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:57500
2018-12-04 20:48:54,696 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2018-12-04 20:48:54,704 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/local-repository/org/apache/hadoop/hadoop-hdfs/2.7.7/hadoop-hdfs-2.7.7-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_46872_datanode____mo9whk/webapp
2018-12-04 20:48:54,927 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46872
2018-12-04 20:48:55,819 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
2018-12-04 20:48:55,856 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
2018-12-04 20:48:55,931 INFO [Block report processor] blockmanagement.BlockManager(1930): BLOCK* processReport 0x2d984d4ef7516d: from storage DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d node DatanodeRegistration(127.0.0.1:33680, datanodeUuid=b4f0e2e4-2a69-4998-9add-8ca52db3c08b, infoPort=59217, infoSecurePort=0, ipcPort=33361, storageInfo=lv=-56;cid=testClusterID;nsid=1721396384;c=0), blocks: 0, hasStaleStorage: true, processing time: 2 msecs
2018-12-04 20:48:55,932 INFO [Block report processor] blockmanagement.BlockManager(1930): BLOCK* processReport 0x2d984d4f2ca8b7: from storage DS-e5e4b851-a625-4939-b76b-08e33db5384e node DatanodeRegistration(127.0.0.1:54375, datanodeUuid=9f9ff2c2-85c2-40ae-982a-9bba1c8f4d95, infoPort=60237, infoSecurePort=0, ipcPort=59129, storageInfo=lv=-56;cid=testClusterID;nsid=1721396384;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2018-12-04 20:48:55,932 INFO [Block report processor] blockmanagement.BlockManager(1930): BLOCK* processReport 0x2d984d4ef7516d: from storage DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5 node DatanodeRegistration(127.0.0.1:33680, datanodeUuid=b4f0e2e4-2a69-4998-9add-8ca52db3c08b, infoPort=59217, infoSecurePort=0, ipcPort=33361, storageInfo=lv=-56;cid=testClusterID;nsid=1721396384;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2018-12-04 20:48:55,933 INFO [Block report processor] blockmanagement.BlockManager(1930): BLOCK* processReport 0x2d984d4f2ca8b7: from storage DS-1db60017-9ad1-4de0-aa53-b88332f13b9e node DatanodeRegistration(127.0.0.1:54375, datanodeUuid=9f9ff2c2-85c2-40ae-982a-9bba1c8f4d95, infoPort=60237, infoSecurePort=0, ipcPort=59129, storageInfo=lv=-56;cid=testClusterID;nsid=1721396384;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2018-12-04 20:48:55,967 ERROR [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471] datanode.DirectoryScanner(430): dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
2018-12-04 20:48:55,979 INFO [Block report processor] blockmanagement.BlockManager(1930): BLOCK* processReport 0x2d984d52fba652: from storage DS-5f235008-470b-44c0-8f58-8abc282f11fb node DatanodeRegistration(127.0.0.1:60454, datanodeUuid=0555b898-ca9b-47ca-bb8b-6eb6c6427ac8, infoPort=46229, infoSecurePort=0, ipcPort=33303, storageInfo=lv=-56;cid=testClusterID;nsid=1721396384;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2018-12-04 20:48:55,980 INFO [Block report processor] blockmanagement.BlockManager(1930): BLOCK* processReport 0x2d984d52fba652: from storage DS-13eb77f1-f887-4435-855a-29c30e684eaa node DatanodeRegistration(127.0.0.1:60454, datanodeUuid=0555b898-ca9b-47ca-bb8b-6eb6c6427ac8, infoPort=46229, infoSecurePort=0, ipcPort=33303, storageInfo=lv=-56;cid=testClusterID;nsid=1721396384;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2018-12-04 20:48:56,012 DEBUG [Time-limited test] hbase.HBaseTestingUtility(679): Setting hbase.rootdir to /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc
2018-12-04 20:48:56,085 ERROR [Time-limited test] server.ZooKeeperServer(472): ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2018-12-04 20:48:56,108 INFO [Time-limited test] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran successful 'stat' on client port=64381
2018-12-04 20:48:56,120 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:56,123 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:56,511 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 7
2018-12-04 20:48:56,512 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741825_1001 size 7
2018-12-04 20:48:56,512 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741825_1001 size 7
2018-12-04 20:48:57,123 INFO [Time-limited test] util.FSUtils(515): Created version file at hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960 with version=8
2018-12-04 20:48:57,124 INFO [Time-limited test] hbase.HBaseTestingUtility(1242): Setting hbase.fs.tmp.dir to hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase-staging
2018-12-04 20:48:57,358 INFO [Time-limited test] metrics.MetricRegistriesLoader(66): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2018-12-04 20:48:57,644 INFO [Time-limited test] client.ConnectionUtils(122): master/asf910:0 server-side Connection retries=18
2018-12-04 20:48:57,673 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-12-04 20:48:57,675 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=6
2018-12-04 20:48:57,675 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=3
2018-12-04 20:48:57,816 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientService, hbase.pb.AdminService
2018-12-04 20:48:58,050 DEBUG [Time-limited test] util.ClassSize(229): Using Unsafe to estimate memory layout
2018-12-04 20:48:58,155 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.154:53736
2018-12-04 20:48:58,173 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: CacheConfig:disabled
2018-12-04 20:48:58,174 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: CacheConfig:disabled
2018-12-04 20:48:58,179 DEBUG [Time-limited test] mob.MobFileCache(123): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2018-12-04 20:48:58,181 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:58,186 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:58,623 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=master:53736 connecting to ZooKeeper ensemble=localhost:64381
2018-12-04 20:48:58,725 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:537360x0, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-12-04 20:48:58,730 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): master:53736-0x1677afb1afa0000 connected
2018-12-04 20:48:58,884 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2018-12-04 20:48:58,885 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:48:58,895 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=53736
2018-12-04 20:48:58,896 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=53736
2018-12-04 20:48:58,897 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=53736
2018-12-04 20:48:58,908 INFO [Time-limited test] master.HMaster(504): hbase.rootdir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960, hbase.cluster.distributed=false
2018-12-04 20:48:59,071 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf910:0 server-side Connection retries=18
2018-12-04 20:48:59,072 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-12-04 20:48:59,073 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=6
2018-12-04 20:48:59,073 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=3
2018-12-04 20:48:59,095 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-12-04 20:48:59,099 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-12-04 20:48:59,109 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.154:34504
2018-12-04 20:48:59,110 INFO [Time-limited test] hfile.CacheConfig(575): Allocating onheap LruBlockCache size=995.60 MB, blockSize=64 KB
2018-12-04 20:48:59,122 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:48:59,123 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:48:59,128 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:59,135 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:59,140 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:34504 connecting to ZooKeeper ensemble=localhost:64381
2018-12-04 20:48:59,158 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:345040x0, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-12-04 20:48:59,165 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:34504-0x1677afb1afa0001 connected
2018-12-04 20:48:59,165 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2018-12-04 20:48:59,166 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:48:59,168 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34504
2018-12-04 20:48:59,169 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=34504
2018-12-04 20:48:59,169 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34504
2018-12-04 20:48:59,205 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf910:0 server-side Connection retries=18
2018-12-04 20:48:59,206 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-12-04 20:48:59,206 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=6
2018-12-04 20:48:59,206 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=3
2018-12-04 20:48:59,207 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-12-04 20:48:59,207 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-12-04 20:48:59,211 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.154:51486
2018-12-04 20:48:59,213 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:48:59,215 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:48:59,219 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:59,235 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:59,244 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:51486 connecting to ZooKeeper ensemble=localhost:64381
2018-12-04 20:48:59,257 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:514860x0, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-12-04 20:48:59,259 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:51486-0x1677afb1afa0002 connected
2018-12-04 20:48:59,259 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2018-12-04 20:48:59,260 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:48:59,262 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=51486
2018-12-04 20:48:59,263 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=51486
2018-12-04 20:48:59,263 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=51486
2018-12-04 20:48:59,306 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/asf910:0 server-side Connection retries=18
2018-12-04 20:48:59,306 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=5
2018-12-04 20:48:59,307 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=6
2018-12-04 20:48:59,307 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=50, handlerCount=3
2018-12-04 20:48:59,308 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-12-04 20:48:59,309 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=320 B
2018-12-04 20:48:59,320 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /67.195.81.154:36011
2018-12-04 20:48:59,321 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:48:59,322 INFO [Time-limited test] hfile.CacheConfig(263): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:48:59,324 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:59,327 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:48:59,329 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:36011 connecting to ZooKeeper ensemble=localhost:64381
2018-12-04 20:48:59,341 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:360110x0, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-12-04 20:48:59,341 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:36011-0x1677afb1afa0003 connected
2018-12-04 20:48:59,342 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2018-12-04 20:48:59,342 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:48:59,351 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=5 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36011
2018-12-04 20:48:59,352 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=6 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=1, port=36011
2018-12-04 20:48:59,379 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36011
2018-12-04 20:48:59,385 INFO [master/asf910:0:becomeActiveMaster] master.HMaster(2244): Adding backup master ZNode /hbase/backup-masters/asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:48:59,410 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(355): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:48:59,450 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2018-12-04 20:48:59,454 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2018-12-04 20:48:59,466 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2018-12-04 20:48:59,454 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2018-12-04 20:48:59,489 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(355): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2018-12-04 20:48:59,493 INFO [master/asf910:0:becomeActiveMaster] master.ActiveMasterManager(172): Deleting ZNode for /hbase/backup-masters/asf910.gq1.ygridcore.net,53736,1543956537196 from backup master directory
2018-12-04 20:48:59,496 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2018-12-04 20:48:59,507 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:48:59,508 WARN [master/asf910:0:becomeActiveMaster] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2018-12-04 20:48:59,509 INFO [master/asf910:0:becomeActiveMaster] master.ActiveMasterManager(181): Registered as active master=asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:48:59,667 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0
2018-12-04 20:48:59,668 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|FINALIZED], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:48:59,690 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|FINALIZED], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:48:59,694 DEBUG [master/asf910:0:becomeActiveMaster] util.FSUtils(667): Created cluster ID file at hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase.id with ID: 31cc6ab8-92e1-4aad-b7af-cbdb7b7ee6f4
2018-12-04 20:48:59,740 INFO [master/asf910:0:becomeActiveMaster] master.MasterFileSystem(396): BOOTSTRAP: creating hbase:meta region
2018-12-04 20:48:59,746 INFO [master/asf910:0:becomeActiveMaster] regionserver.HRegion(7003): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'false', BLOCKSIZE => '8192'}, {NAME => 'rep_barrier', VERSIONS => '2147483647', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'table', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960 Table name == hbase:meta
2018-12-04 20:48:59,786 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0
2018-12-04 20:48:59,788 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0
2018-12-04 20:48:59,789 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:48:59,797 DEBUG [master/asf910:0:becomeActiveMaster]
regionserver.HRegion(833): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:48:59,850 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/info 2018-12-04 20:48:59,874 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(237): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:48:59,889 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:48:59,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(332): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:48:59,923 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for 
path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/rep_barrier 2018-12-04 20:48:59,924 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(237): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:48:59,925 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:48:59,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(332): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:48:59,931 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/table 2018-12-04 20:48:59,932 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(237): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, 
minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:48:59,933 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:48:59,935 INFO [StoreOpener-1588230740-1] regionserver.HStore(332): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:48:59,950 DEBUG [master/asf910:0:becomeActiveMaster] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740 2018-12-04 20:48:59,952 DEBUG [master/asf910:0:becomeActiveMaster] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/meta/1588230740 2018-12-04 20:48:59,970 DEBUG [master/asf910:0:becomeActiveMaster] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7M)) instead. 
2018-12-04 20:48:59,973 DEBUG [master/asf910:0:becomeActiveMaster] regionserver.HRegion(998): writing seq id for 1588230740
2018-12-04 20:48:59,994 DEBUG [master/asf910:0:becomeActiveMaster] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-12-04 20:48:59,994 INFO [master/asf910:0:becomeActiveMaster] regionserver.HRegion(1002): Opened 1588230740; next sequenceid=2
2018-12-04 20:48:59,994 DEBUG [master/asf910:0:becomeActiveMaster] regionserver.HRegion(1541): Closing 1588230740, disabling compactions & flushes
2018-12-04 20:48:59,995 DEBUG [master/asf910:0:becomeActiveMaster] regionserver.HRegion(1581): Updates disabled for region hbase:meta,,1.1588230740
2018-12-04 20:48:59,996 INFO [master/asf910:0:becomeActiveMaster] regionserver.HRegion(1698): Closed hbase:meta,,1.1588230740
2018-12-04 20:49:00,139 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:49:00,140 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:49:00,140 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:49:00,146 DEBUG [master/asf910:0:becomeActiveMaster] util.FSTableDescriptors(684): Wrote into hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2018-12-04 20:49:00,212 INFO [master/asf910:0:becomeActiveMaster] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-12-04 20:49:00,231 INFO [master/asf910:0:becomeActiveMaster] coordination.ZKSplitLogManagerCoordination(494): Found 0 orphan tasks and 0 rescan nodes
2018-12-04 20:49:00,343 INFO [master/asf910:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3366ddb3 to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:00,420 DEBUG [master/asf910:0:becomeActiveMaster] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66dd3566, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:00,460 INFO [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(588): Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker count=160, start 1 urgent thread(s)
2018-12-04 20:49:00,463 DEBUG [master/asf910:0:becomeActiveMaster] wal.WALProcedureStore(402): Starting WAL Procedure Store lease recovery
2018-12-04 20:49:00,470 DEBUG [master/asf910:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(877): org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder not available, will not use builder API for file creation.
2018-12-04 20:49:00,475 WARN [master/asf910:0:becomeActiveMaster] util.CommonFSUtils$StreamCapabilities(994): Your Hadoop installation does not include the StreamCapabilities class from HDFS-11644, so we will skip checking if any FSDataOutputStreams actually support hflush/hsync. If you are running on top of HDFS this probably just means you have an older version and this can be ignored. If you are running on top of an alternate FileSystem implementation you should manually verify that hflush and hsync are implemented; otherwise you risk data loss and hard to diagnose errors when our assumptions are violated.
2018-12-04 20:49:00,476 DEBUG [master/asf910:0:becomeActiveMaster] util.CommonFSUtils$StreamCapabilities(1001): The first request to check for StreamCapabilities came from this stacktrace.
java.lang.ClassNotFoundException: org.apache.hadoop.fs.StreamCapabilities
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.util.CommonFSUtils$StreamCapabilities.<clinit>(CommonFSUtils.java:990)
	at org.apache.hadoop.hbase.util.CommonFSUtils.hasCapability(CommonFSUtils.java:1028)
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1085)
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:423)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:611)
	at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1457)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:895)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2264)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:581)
	at java.lang.Thread.run(Thread.java:748)
2018-12-04 20:49:00,481 INFO [master/asf910:0:becomeActiveMaster] wal.WALProcedureStore(1130): Rolled new Procedure Store WAL, id=1
2018-12-04 20:49:00,483 DEBUG [master/asf910:0:becomeActiveMaster] wal.WALProcedureStore(437): Lease acquired for flushLogId=1
2018-12-04 20:49:00,483 INFO [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(613): Recovered WALProcedureStore lease in 20msec
2018-12-04 20:49:00,485 DEBUG [master/asf910:0:becomeActiveMaster] wal.WALProcedureStore(455): No state logs to replay.
2018-12-04 20:49:00,485 INFO [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(627): Loaded WALProcedureStore in 1msec
2018-12-04 20:49:00,485 INFO [master/asf910:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(97): Instantiated, coreThreads=128 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2018-12-04 20:49:00,539 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(614): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2018-12-04 20:49:00,551 INFO [master/asf910:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2018-12-04 20:49:00,591 INFO [master/asf910:0:becomeActiveMaster] balancer.BaseLoadBalancer(1035): slop=0.001, systemTablesOnMaster=false
2018-12-04 20:49:00,598 INFO [master/asf910:0:becomeActiveMaster] balancer.StochasticLoadBalancer(213): Loaded config; maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, etc.
2018-12-04 20:49:00,603 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2018-12-04 20:49:00,604 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2018-12-04 20:49:00,633 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2018-12-04 20:49:00,634 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2018-12-04 20:49:00,665 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2018-12-04 20:49:00,666 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2018-12-04 20:49:00,665 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2018-12-04 20:49:00,666 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2018-12-04 20:49:00,667 INFO [master/asf910:0:becomeActiveMaster] master.HMaster(795): Active/primary master=asf910.gq1.ygridcore.net,53736,1543956537196, sessionid=0x1677afb1afa0000, setting cluster-up flag (Was=false)
2018-12-04 20:49:00,723 DEBUG [master/asf910:0:becomeActiveMaster] procedure.ZKProcedureUtil(272): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2018-12-04 20:49:00,726 DEBUG [master/asf910:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:49:00,773 DEBUG [master/asf910:0:becomeActiveMaster] procedure.ZKProcedureUtil(272): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2018-12-04 20:49:00,775 DEBUG [master/asf910:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:49:00,809 INFO [RS:0;asf910:34504] regionserver.HRegionServer(878): ClusterId : 31cc6ab8-92e1-4aad-b7af-cbdb7b7ee6f4
2018-12-04 20:49:00,810 INFO [RS:2;asf910:36011] regionserver.HRegionServer(878): ClusterId : 31cc6ab8-92e1-4aad-b7af-cbdb7b7ee6f4
2018-12-04 20:49:00,809 INFO [RS:1;asf910:51486] regionserver.HRegionServer(878): ClusterId : 31cc6ab8-92e1-4aad-b7af-cbdb7b7ee6f4
2018-12-04 20:49:00,816 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing
2018-12-04 20:49:00,816 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing
2018-12-04 20:49:00,816 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing
2018-12-04 20:49:00,875 INFO [master/asf910:0:becomeActiveMaster] master.HMaster(946): hbase:meta {1588230740 state=OFFLINE, ts=1543956540541, server=null}
2018-12-04 20:49:00,893 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized
2018-12-04 20:49:00,893 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing
2018-12-04 20:49:00,893 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized
2018-12-04 20:49:00,893 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized
2018-12-04 20:49:00,894 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing
2018-12-04 20:49:00,893 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing
2018-12-04 20:49:00,966 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized
2018-12-04 20:49:00,966 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized
2018-12-04 20:49:00,967 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized
2018-12-04 20:49:00,969 INFO [RS:0;asf910:34504] zookeeper.ReadOnlyZKClient(139): Connect 0x30c17016 to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:00,970 INFO [RS:2;asf910:36011] zookeeper.ReadOnlyZKClient(139): Connect 0x1dadac8c to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:00,970 INFO [RS:1;asf910:51486] zookeeper.ReadOnlyZKClient(139): Connect 0x0a5c979d to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:00,991 DEBUG [RS:0;asf910:34504] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e06887e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:00,993 DEBUG [RS:1;asf910:51486] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49b68a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:00,991 DEBUG [RS:2;asf910:36011] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@136a5149, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:00,995 DEBUG [RS:0;asf910:34504] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a391b66, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf910.gq1.ygridcore.net/67.195.81.154:0
2018-12-04 20:49:00,995 DEBUG [RS:2;asf910:36011] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1755c7fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf910.gq1.ygridcore.net/67.195.81.154:0
2018-12-04 20:49:00,995 DEBUG [RS:1;asf910:51486] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a4fced, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf910.gq1.ygridcore.net/67.195.81.154:0
2018-12-04 20:49:01,000 DEBUG [RS:2;asf910:36011] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:2;asf910:36011
2018-12-04 20:49:01,000 DEBUG [RS:0;asf910:34504] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:0;asf910:34504
2018-12-04 20:49:01,000 DEBUG [RS:1;asf910:51486] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:1;asf910:51486
2018-12-04 20:49:01,005 INFO [RS:2;asf910:36011] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-12-04 20:49:01,006 INFO [RS:2;asf910:36011] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-12-04 20:49:01,006 INFO [RS:1;asf910:51486] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-12-04 20:49:01,007 INFO [RS:1;asf910:51486] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-12-04 20:49:01,007 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(947): About to register with Master.
2018-12-04 20:49:01,005 INFO [RS:0;asf910:34504] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-12-04 20:49:01,007 INFO [RS:0;asf910:34504] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-12-04 20:49:01,007 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(947): About to register with Master.
2018-12-04 20:49:01,007 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(947): About to register with Master.
2018-12-04 20:49:01,010 INFO [RS:0;asf910:34504] regionserver.HRegionServer(2593): reportForDuty to master=asf910.gq1.ygridcore.net,53736,1543956537196 with port=34504, startcode=1543956539068
2018-12-04 20:49:01,010 INFO [RS:1;asf910:51486] regionserver.HRegionServer(2593): reportForDuty to master=asf910.gq1.ygridcore.net,53736,1543956537196 with port=51486, startcode=1543956539203
2018-12-04 20:49:01,010 INFO [RS:2;asf910:36011] regionserver.HRegionServer(2593): reportForDuty to master=asf910.gq1.ygridcore.net,53736,1543956537196 with port=36011, startcode=1543956539302
2018-12-04 20:49:01,160 INFO [RS-EventLoopGroup-1-4] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:52963, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService
2018-12-04 20:49:01,160 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:41259, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService
2018-12-04 20:49:01,160 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:48728, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService
2018-12-04 20:49:01,176 DEBUG [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(1092): Stored pid=1, state=RUNNABLE:INIT_META_ASSIGN_META; InitMetaProcedure table=hbase:meta
2018-12-04 20:49:01,183 DEBUG [master/asf910:0:becomeActiveMaster] procedure.MasterProcedureScheduler(356): Add MetaQueue(hbase:meta, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=1, state=RUNNABLE:INIT_META_ASSIGN_META; InitMetaProcedure table=hbase:meta
2018-12-04 20:49:01,211 DEBUG [master/asf910:0:becomeActiveMaster] executor.ExecutorService(92): Starting executor service name=MASTER_OPEN_REGION-master/asf910:0, corePoolSize=5, maxPoolSize=5
2018-12-04 20:49:01,212 DEBUG [master/asf910:0:becomeActiveMaster] executor.ExecutorService(92): Starting executor service name=MASTER_CLOSE_REGION-master/asf910:0, corePoolSize=5, maxPoolSize=5
2018-12-04 20:49:01,212 DEBUG [master/asf910:0:becomeActiveMaster] executor.ExecutorService(92): Starting executor service name=MASTER_SERVER_OPERATIONS-master/asf910:0, corePoolSize=5, maxPoolSize=5
2018-12-04 20:49:01,212 DEBUG [master/asf910:0:becomeActiveMaster] executor.ExecutorService(92): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/asf910:0, corePoolSize=5, maxPoolSize=5
2018-12-04 20:49:01,213 DEBUG [master/asf910:0:becomeActiveMaster] executor.ExecutorService(92): Starting executor service name=M_LOG_REPLAY_OPS-master/asf910:0, corePoolSize=10, maxPoolSize=10
2018-12-04 20:49:01,213 DEBUG [master/asf910:0:becomeActiveMaster] executor.ExecutorService(92): Starting executor service name=MASTER_TABLE_OPERATIONS-master/asf910:0, corePoolSize=1, maxPoolSize=1
2018-12-04 20:49:01,213 DEBUG [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(640): Start workers 16, urgent workers 1
2018-12-04 20:49:01,219 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(366): Remove MetaQueue(hbase:meta, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=1, state=RUNNABLE:INIT_META_ASSIGN_META; InitMetaProcedure table=hbase:meta
2018-12-04 20:49:01,221 INFO [master/asf910:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1543956571221
2018-12-04 20:49:01,223 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:meta, xlock=true (1) sharedLock=0 size=0) from run queue because: pid=1, state=RUNNABLE:INIT_META_ASSIGN_META; InitMetaProcedure table=hbase:meta held the exclusive lock
2018-12-04 20:49:01,224 INFO [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore$DirScanPool(90): Cleaner pool size is 4
2018-12-04 20:49:01,226 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2018-12-04 20:49:01,227 INFO [master/asf910:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(106): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:64381
2018-12-04 20:49:01,228 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2018-12-04 20:49:01,229 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2018-12-04 20:49:01,229 INFO [master/asf910:0:becomeActiveMaster] cleaner.LogCleaner(155): Creating OldWALs cleaners with size=2
2018-12-04 20:49:01,237 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2018-12-04 20:49:01,239 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2018-12-04 20:49:01,240 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2018-12-04 20:49:01,242 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.HFileCleaner(225): Starting for large file=Thread[master/asf910:0:becomeActiveMaster-HFileCleaner.large.0-1543956541242,5,FailOnTimeoutGroup]
2018-12-04 20:49:01,242 DEBUG [master/asf910:0:becomeActiveMaster] cleaner.HFileCleaner(240): Starting for small files=Thread[master/asf910:0:becomeActiveMaster-HFileCleaner.small.0-1543956541242,5,FailOnTimeoutGroup]
2018-12-04 20:49:01,247 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(2613): Master is not running yet
2018-12-04 20:49:01,247 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(2613): Master is not running yet
2018-12-04 20:49:01,247 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(2613): Master is not running yet
2018-12-04 20:49:01,248 WARN [RS:2;asf910:36011] regionserver.HRegionServer(955): reportForDuty failed; sleeping 100 ms and then retrying.
2018-12-04 20:49:01,248 WARN [RS:1;asf910:51486] regionserver.HRegionServer(955): reportForDuty failed; sleeping 100 ms and then retrying.
2018-12-04 20:49:01,248 WARN [RS:0;asf910:34504] regionserver.HRegionServer(955): reportForDuty failed; sleeping 100 ms and then retrying.
2018-12-04 20:49:01,265 DEBUG [master/asf910:0:becomeActiveMaster-EventThread] zookeeper.ZKWatcher(478): replicationLogCleaner0x0, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-12-04 20:49:01,267 DEBUG [master/asf910:0:becomeActiveMaster-EventThread] zookeeper.ZKWatcher(543): replicationLogCleaner-0x1677afb1afa0008 connected
2018-12-04 20:49:01,349 INFO [RS:2;asf910:36011] regionserver.HRegionServer(2593): reportForDuty to master=asf910.gq1.ygridcore.net,53736,1543956537196 with port=36011, startcode=1543956539302
2018-12-04 20:49:01,349 INFO [RS:0;asf910:34504] regionserver.HRegionServer(2593): reportForDuty to master=asf910.gq1.ygridcore.net,53736,1543956537196 with port=34504, startcode=1543956539068
2018-12-04 20:49:01,349 INFO [RS:1;asf910:51486] regionserver.HRegionServer(2593): reportForDuty to master=asf910.gq1.ygridcore.net,53736,1543956537196 with port=51486, startcode=1543956539203
2018-12-04 20:49:01,367 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.ServerManager(403): Registering regionserver=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:01,367 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.ServerManager(403): Registering regionserver=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:01,368 INFO
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.ServerManager(403): Registering regionserver=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,370 INFO [PEWorker-1] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}] 2018-12-04 20:49:01,372 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=1, state=WAITING, locked=true; InitMetaProcedure table=hbase:meta as the 0th rollback step 2018-12-04 20:49:01,381 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(1490): Config from master: hbase.rootdir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960 2018-12-04 20:49:01,381 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(1490): Config from master: hbase.rootdir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960 2018-12-04 20:49:01,381 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(1490): Config from master: hbase.rootdir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960 2018-12-04 20:49:01,383 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(1490): Config from master: fs.defaultFS=hdfs://localhost:45471 2018-12-04 20:49:01,381 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(1490): Config from master: fs.defaultFS=hdfs://localhost:45471 2018-12-04 20:49:01,383 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(1490): Config from master: hbase.master.info.port=-1 2018-12-04 20:49:01,383 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(1490): Config from master: fs.defaultFS=hdfs://localhost:45471 2018-12-04 20:49:01,383 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(1490): Config from master: hbase.master.info.port=-1 2018-12-04 20:49:01,384 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(1490): Config from master: hbase.master.info.port=-1 2018-12-04 20:49:01,466 
DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2018-12-04 20:49:01,482 DEBUG [RS:1;asf910:51486] zookeeper.ZKUtil(355): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,483 WARN [RS:1;asf910:51486] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2018-12-04 20:49:01,483 DEBUG [RS:2;asf910:36011] zookeeper.ZKUtil(355): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,484 WARN [RS:2;asf910:36011] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2018-12-04 20:49:01,491 INFO [RegionServerTracker-0] master.RegionServerTracker(182): RegionServer ephemeral node created, adding [asf910.gq1.ygridcore.net,51486,1543956539203] 2018-12-04 20:49:01,491 INFO [RegionServerTracker-0] master.RegionServerTracker(182): RegionServer ephemeral node created, adding [asf910.gq1.ygridcore.net,36011,1543956539302] 2018-12-04 20:49:01,491 INFO [RegionServerTracker-0] master.RegionServerTracker(182): RegionServer ephemeral node created, adding [asf910.gq1.ygridcore.net,34504,1543956539068] 2018-12-04 20:49:01,492 DEBUG [RS:0;asf910:34504] zookeeper.ZKUtil(355): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:01,493 WARN [RS:0;asf910:34504] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2018-12-04 20:49:01,515 DEBUG [RS:1;asf910:51486] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(421): org.apache.hadoop.hdfs.protocolPB.PBHelperClient not found (Hadoop is pre-2.8.0?); using class org.apache.hadoop.hdfs.protocolPB.PBHelper instead. 
2018-12-04 20:49:01,520 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(356): Add MetaQueue(hbase:meta, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 2018-12-04 20:49:01,520 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove MetaQueue(hbase:meta, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 2018-12-04 20:49:01,522 INFO [PEWorker-9] procedure.MasterProcedureScheduler(741): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 2018-12-04 20:49:01,584 DEBUG [RS:1;asf910:51486] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(494): No DfsClientConf class found, should be hadoop 2.7- java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.client.impl.DfsClientConf at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createChecksumCreater(FanOutOneBlockAsyncDFSOutputHelper.java:492) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:556) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:136) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:136) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at 
org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:198) at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1794) at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1512) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:156) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:958) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:183) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:129) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:167) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:164) at java.lang.Thread.run(Thread.java:748) 2018-12-04 20:49:01,586 DEBUG [RS:1;asf910:51486] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(528): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2018-12-04 20:49:01,591 DEBUG [RS:1;asf910:51486] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(196): No PBHelperClient class found, should be hadoop 2.7- java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.protocolPB.PBHelperClient at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createPBHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:194) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:299) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:137) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:136) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:198) at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1794) at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1512) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:156) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:958) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:183) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:129) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:167) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:164) at java.lang.Thread.run(Thread.java:748) 2018-12-04 20:49:01,593 INFO [RS:1;asf910:51486] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 
2018-12-04 20:49:01,593 INFO [RS:2;asf910:36011] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-12-04 20:49:01,593 INFO [RS:0;asf910:34504] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2018-12-04 20:49:01,601 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(1801): logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,601 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(1801): logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:01,601 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(1801): logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,635 DEBUG [RS:0;asf910:34504] zookeeper.ZKUtil(355): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,636 DEBUG [RS:1;asf910:51486] zookeeper.ZKUtil(355): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,635 DEBUG [RS:2;asf910:36011] zookeeper.ZKUtil(355): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,636 DEBUG [RS:0;asf910:34504] zookeeper.ZKUtil(355): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,636 DEBUG [RS:1;asf910:51486] 
zookeeper.ZKUtil(355): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,637 DEBUG [RS:2;asf910:36011] zookeeper.ZKUtil(355): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,637 DEBUG [RS:0;asf910:34504] zookeeper.ZKUtil(355): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:01,640 DEBUG [RS:1;asf910:51486] zookeeper.ZKUtil(355): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:01,640 DEBUG [RS:2;asf910:36011] zookeeper.ZKUtil(355): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:01,649 DEBUG [RS:1;asf910:51486] regionserver.Replication(131): Replication stats-in-log period=300 seconds 2018-12-04 20:49:01,650 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:meta, xlock=false sharedLock=1 size=0) to run queue because: pid=1, state=WAITING; InitMetaProcedure table=hbase:meta released the exclusive lock 2018-12-04 20:49:01,650 DEBUG [RS:2;asf910:36011] regionserver.Replication(131): Replication stats-in-log period=300 seconds 2018-12-04 20:49:01,659 DEBUG [RS:0;asf910:34504] regionserver.Replication(131): Replication stats-in-log period=300 seconds 2018-12-04 20:49:01,659 INFO [RS:1;asf910:51486] regionserver.MetricsRegionServerWrapperImpl(144): Computing regionserver metrics every 5000 milliseconds 2018-12-04 20:49:01,660 INFO [RS:2;asf910:36011] 
regionserver.MetricsRegionServerWrapperImpl(144): Computing regionserver metrics every 5000 milliseconds 2018-12-04 20:49:01,662 INFO [RS:0;asf910:34504] regionserver.MetricsRegionServerWrapperImpl(144): Computing regionserver metrics every 5000 milliseconds 2018-12-04 20:49:01,663 INFO [PEWorker-9] assignment.AssignProcedure(254): Starting pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; forceNewPlan=false, retain=false 2018-12-04 20:49:01,664 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 as the 1th rollback step 2018-12-04 20:49:01,710 INFO [RS:2;asf910:36011] regionserver.MemStoreFlusher(132): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-12-04 20:49:01,711 INFO [RS:0;asf910:34504] regionserver.MemStoreFlusher(132): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-12-04 20:49:01,710 INFO [RS:1;asf910:51486] regionserver.MemStoreFlusher(132): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2018-12-04 20:49:01,764 INFO [RS:0;asf910:34504] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-12-04 20:49:01,765 INFO [RS:2;asf910:36011] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2018-12-04 20:49:01,764 INFO [RS:1;asf910:51486] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 
60000 ms 2018-12-04 20:49:01,768 INFO [RS:2;asf910:36011] regionserver.HRegionServer$CompactionChecker(1690): CompactionChecker runs every PT1S 2018-12-04 20:49:01,768 INFO [RS:1;asf910:51486] regionserver.HRegionServer$CompactionChecker(1690): CompactionChecker runs every PT1S 2018-12-04 20:49:01,768 INFO [RS:0;asf910:34504] regionserver.HRegionServer$CompactionChecker(1690): CompactionChecker runs every PT1S 2018-12-04 20:49:01,786 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,786 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,787 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf910:0, corePoolSize=1, maxPoolSize=1 2018-12-04 20:49:01,787 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,787 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,787 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf910:0, corePoolSize=1, maxPoolSize=1 2018-12-04 20:49:01,788 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf910:0, corePoolSize=2, maxPoolSize=2 2018-12-04 20:49:01,788 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0, corePoolSize=10, maxPoolSize=10 2018-12-04 20:49:01,788 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,788 DEBUG [RS:2;asf910:36011] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf910:0, corePoolSize=2, maxPoolSize=2 2018-12-04 20:49:01,787 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_OPEN_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,789 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_OPEN_META-regionserver/asf910:0, corePoolSize=1, maxPoolSize=1 2018-12-04 20:49:01,789 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,789 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,789 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf910:0, corePoolSize=1, maxPoolSize=1 2018-12-04 20:49:01,790 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf910:0, corePoolSize=2, maxPoolSize=2 2018-12-04 20:49:01,790 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0, corePoolSize=10, maxPoolSize=10 2018-12-04 20:49:01,790 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,791 DEBUG [RS:1;asf910:51486] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf910:0, corePoolSize=2, maxPoolSize=2 2018-12-04 20:49:01,787 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): 
Starting executor service name=RS_OPEN_META-regionserver/asf910:0, corePoolSize=1, maxPoolSize=1 2018-12-04 20:49:01,794 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,795 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_REGION-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,795 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_CLOSE_META-regionserver/asf910:0, corePoolSize=1, maxPoolSize=1 2018-12-04 20:49:01,796 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/asf910:0, corePoolSize=2, maxPoolSize=2 2018-12-04 20:49:01,796 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0, corePoolSize=10, maxPoolSize=10 2018-12-04 20:49:01,797 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/asf910:0, corePoolSize=3, maxPoolSize=3 2018-12-04 20:49:01,797 DEBUG [RS:0;asf910:34504] executor.ExecutorService(92): Starting executor service name=RS_REFRESH_PEER-regionserver/asf910:0, corePoolSize=2, maxPoolSize=2 2018-12-04 20:49:01,818 DEBUG [master/asf910:0] assignment.AssignmentManager(1707): Processing assignQueue; systemServersCount=3, allServersCount=3 2018-12-04 20:49:01,836 INFO [SplitLogWorker-asf910:36011] regionserver.SplitLogWorker(136): SplitLogWorker asf910.gq1.ygridcore.net,36011,1543956539302 starting 2018-12-04 20:49:01,842 INFO [SplitLogWorker-asf910:51486] regionserver.SplitLogWorker(136): SplitLogWorker asf910.gq1.ygridcore.net,51486,1543956539203 starting 2018-12-04 20:49:01,848 INFO [RS:2;asf910:36011] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-12-04 20:49:01,848 
INFO [RS:1;asf910:51486] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-12-04 20:49:01,854 INFO [RS:1;asf910:51486] regionserver.ChunkCreator(499): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 448, initial count 0 2018-12-04 20:49:01,854 INFO [RS:2;asf910:36011] regionserver.ChunkCreator(499): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 448, initial count 0 2018-12-04 20:49:01,857 INFO [RS:2;asf910:36011] regionserver.ChunkCreator(499): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 497, initial count 0 2018-12-04 20:49:01,857 INFO [RS:1;asf910:51486] regionserver.ChunkCreator(499): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 497, initial count 0 2018-12-04 20:49:01,860 INFO [RS:0;asf910:34504] regionserver.HeapMemoryManager(210): Starting, tuneOn=false 2018-12-04 20:49:01,860 INFO [SplitLogWorker-asf910:34504] regionserver.SplitLogWorker(136): SplitLogWorker asf910.gq1.ygridcore.net,34504,1543956539068 starting 2018-12-04 20:49:01,871 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add MetaQueue(hbase:meta, xlock=false sharedLock=0 size=1) to run queue because: pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 has lock 2018-12-04 20:49:01,889 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove MetaQueue(hbase:meta, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 2018-12-04 20:49:01,889 WARN [PEWorker-12] assignment.AssignmentManager(1056): Why is ServerStateNode for asf910.gq1.ygridcore.net,51486,1543956539203 empty at this point? Creating... 2018-12-04 20:49:01,896 INFO [PEWorker-12] assignment.AssignProcedure(282): Early suspend! 
pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,897 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 as the 2th rollback step 2018-12-04 20:49:01,909 INFO [RS:2;asf910:36011] regionserver.HRegionServer(1531): Serving as asf910.gq1.ygridcore.net,36011,1543956539302, RpcServer on asf910.gq1.ygridcore.net/67.195.81.154:36011, sessionid=0x1677afb1afa0003 2018-12-04 20:49:01,909 INFO [RS:1;asf910:51486] regionserver.HRegionServer(1531): Serving as asf910.gq1.ygridcore.net,51486,1543956539203, RpcServer on asf910.gq1.ygridcore.net/67.195.81.154:51486, sessionid=0x1677afb1afa0002 2018-12-04 20:49:01,910 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-12-04 20:49:01,910 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-12-04 20:49:01,910 DEBUG [RS:2;asf910:36011] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,910 DEBUG [RS:1;asf910:51486] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:01,911 DEBUG [RS:2;asf910:36011] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf910.gq1.ygridcore.net,36011,1543956539302' 2018-12-04 20:49:01,911 DEBUG [RS:2;asf910:36011] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2018-12-04 20:49:01,911 DEBUG [RS:1;asf910:51486] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'asf910.gq1.ygridcore.net,51486,1543956539203' 2018-12-04 20:49:01,913 DEBUG [RS:1;asf910:51486] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2018-12-04 20:49:01,914 INFO [RS:0;asf910:34504] regionserver.HRegionServer(1531): Serving as asf910.gq1.ygridcore.net,34504,1543956539068, RpcServer on asf910.gq1.ygridcore.net/67.195.81.154:34504, sessionid=0x1677afb1afa0001 2018-12-04 20:49:01,914 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting 2018-12-04 20:49:01,914 DEBUG [RS:0;asf910:34504] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:01,914 DEBUG [RS:2;asf910:36011] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2018-12-04 20:49:01,914 DEBUG [RS:0;asf910:34504] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf910.gq1.ygridcore.net,34504,1543956539068' 2018-12-04 20:49:01,914 DEBUG [RS:0;asf910:34504] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2018-12-04 20:49:01,914 DEBUG [RS:1;asf910:51486] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2018-12-04 20:49:01,915 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started 2018-12-04 20:49:01,915 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting 2018-12-04 20:49:01,915 DEBUG [RS:2;asf910:36011] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:01,915 DEBUG [RS:2;asf910:36011] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf910.gq1.ygridcore.net,36011,1543956539302' 
2018-12-04 20:49:01,915 DEBUG [RS:0;asf910:34504] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2018-12-04 20:49:01,915 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started
2018-12-04 20:49:01,915 DEBUG [RS:2;asf910:36011] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2018-12-04 20:49:01,916 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting
2018-12-04 20:49:01,916 DEBUG [RS:1;asf910:51486] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:01,916 DEBUG [RS:1;asf910:51486] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf910.gq1.ygridcore.net,51486,1543956539203'
2018-12-04 20:49:01,916 DEBUG [RS:1;asf910:51486] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2018-12-04 20:49:01,916 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started
2018-12-04 20:49:01,916 DEBUG [RS:2;asf910:36011] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2018-12-04 20:49:01,916 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting
2018-12-04 20:49:01,917 DEBUG [RS:1;asf910:51486] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2018-12-04 20:49:01,917 DEBUG [RS:0;asf910:34504] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:01,917 DEBUG [RS:2;asf910:36011] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-12-04 20:49:01,917 DEBUG [RS:0;asf910:34504] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'asf910.gq1.ygridcore.net,34504,1543956539068'
2018-12-04 20:49:01,917 INFO [RS:2;asf910:36011] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-12-04 20:49:01,917 DEBUG [RS:0;asf910:34504] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2018-12-04 20:49:01,917 DEBUG [RS:1;asf910:51486] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-12-04 20:49:01,917 INFO [RS:2;asf910:36011] quotas.RegionServerSpaceQuotaManager(74): Quota support disabled, not starting space quota manager.
2018-12-04 20:49:01,918 INFO [RS:1;asf910:51486] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-12-04 20:49:01,918 INFO [RS:1;asf910:51486] quotas.RegionServerSpaceQuotaManager(74): Quota support disabled, not starting space quota manager.
2018-12-04 20:49:01,918 DEBUG [RS:0;asf910:34504] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2018-12-04 20:49:01,919 DEBUG [RS:0;asf910:34504] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-12-04 20:49:01,919 INFO [RS:0;asf910:34504] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-12-04 20:49:01,919 INFO [RS:0;asf910:34504] quotas.RegionServerSpaceQuotaManager(74): Quota support disabled, not starting space quota manager.
2018-12-04 20:49:02,055 INFO [RS:2;asf910:36011] wal.AbstractFSWAL(414): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf910.gq1.ygridcore.net%2C36011%2C1543956539302, suffix=, logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,36011,1543956539302, archiveDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs
2018-12-04 20:49:02,055 INFO [RS:1;asf910:51486] wal.AbstractFSWAL(414): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf910.gq1.ygridcore.net%2C51486%2C1543956539203, suffix=, logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,51486,1543956539203, archiveDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs
2018-12-04 20:49:02,055 INFO [RS:0;asf910:34504] wal.AbstractFSWAL(414): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf910.gq1.ygridcore.net%2C34504%2C1543956539068, suffix=, logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,34504,1543956539068, archiveDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs
2018-12-04 20:49:02,112 DEBUG [RS-EventLoopGroup-5-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK]
2018-12-04 20:49:02,120 DEBUG [RS-EventLoopGroup-5-8] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:54375,DS-1db60017-9ad1-4de0-aa53-b88332f13b9e,DISK]
2018-12-04 20:49:02,121 DEBUG [RS-EventLoopGroup-5-6] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK]
2018-12-04 20:49:02,127 DEBUG [RS-EventLoopGroup-5-11] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]
2018-12-04 20:49:02,128 DEBUG [RS-EventLoopGroup-5-12] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:54375,DS-1db60017-9ad1-4de0-aa53-b88332f13b9e,DISK]
2018-12-04 20:49:02,128 DEBUG [RS-EventLoopGroup-5-10] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]
2018-12-04 20:49:02,132 DEBUG [RS-EventLoopGroup-5-7] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]
2018-12-04 20:49:02,134 DEBUG [RS-EventLoopGroup-5-9] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK]
2018-12-04 20:49:02,137 DEBUG [RS-EventLoopGroup-5-13] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK]
2018-12-04 20:49:02,248 INFO [RS:1;asf910:51486] wal.AbstractFSWAL(672): New WAL /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,51486,1543956539203/asf910.gq1.ygridcore.net%2C51486%2C1543956539203.1543956542083
2018-12-04 20:49:02,248 INFO [RS:2;asf910:36011] wal.AbstractFSWAL(672): New WAL /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,36011,1543956539302/asf910.gq1.ygridcore.net%2C36011%2C1543956539302.1543956542083
2018-12-04 20:49:02,248 INFO [RS:0;asf910:34504] wal.AbstractFSWAL(672): New WAL /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,34504,1543956539068/asf910.gq1.ygridcore.net%2C34504%2C1543956539068.1543956542083
2018-12-04 20:49:02,249 DEBUG [RS:1;asf910:51486] wal.AbstractFSWAL(762): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK], DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-1db60017-9ad1-4de0-aa53-b88332f13b9e,DISK]]
2018-12-04 20:49:02,249 DEBUG [RS:2;asf910:36011] wal.AbstractFSWAL(762): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK], DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK]]
2018-12-04 20:49:02,249 DEBUG [RS:0;asf910:34504] wal.AbstractFSWAL(762): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-1db60017-9ad1-4de0-aa53-b88332f13b9e,DISK], DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]]
2018-12-04 20:49:02,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add MetaQueue(hbase:meta, xlock=false sharedLock=0 size=1) to run queue because: pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 has lock
2018-12-04 20:49:02,282 DEBUG [PEWorker-13] procedure.MasterProcedureScheduler(366): Remove MetaQueue(hbase:meta, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740
2018-12-04 20:49:02,284 INFO [PEWorker-13] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:02,324 DEBUG [PEWorker-13] zookeeper.MetaTableLocator(466): META region location doesn't exist, create it
2018-12-04 20:49:02,358 INFO [PEWorker-13] assignment.RegionTransitionProcedure(267): Dispatch pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740
2018-12-04 20:49:02,358 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 as the 3th rollback step
2018-12-04 20:49:02,523 DEBUG [RSProcedureDispatcher-pool3-t1] master.ServerManager(728): New admin connection to asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:02,540 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:51558, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2018-12-04 20:49:02,576 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=51486] regionserver.RSRpcServices(1987): Open hbase:meta,,1.1588230740
2018-12-04 20:49:02,581 INFO [RS_OPEN_META-regionserver/asf910:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2018-12-04 20:49:02,588 INFO [RS_OPEN_META-regionserver/asf910:0-0] wal.AbstractFSWAL(414): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=asf910.gq1.ygridcore.net%2C51486%2C1543956539203.meta, suffix=.meta, logDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,51486,1543956539203, archiveDir=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs
2018-12-04 20:49:02,615 DEBUG [RS-EventLoopGroup-5-24] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:60454,DS-13eb77f1-f887-4435-855a-29c30e684eaa,DISK]
2018-12-04 20:49:02,623 DEBUG [RS-EventLoopGroup-5-25] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK]
2018-12-04 20:49:02,626 DEBUG [RS-EventLoopGroup-5-26] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33680,DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d,DISK]
2018-12-04 20:49:02,634 INFO [RS_OPEN_META-regionserver/asf910:0-0] wal.AbstractFSWAL(672): New WAL /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/WALs/asf910.gq1.ygridcore.net,51486,1543956539203/asf910.gq1.ygridcore.net%2C51486%2C1543956539203.meta.1543956542592.meta
2018-12-04 20:49:02,635 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] wal.AbstractFSWAL(762): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:60454,DS-13eb77f1-f887-4435-855a-29c30e684eaa,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK], DatanodeInfoWithStorage[127.0.0.1:33680,DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d,DISK]]
2018-12-04 20:49:02,636 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2018-12-04 20:49:02,666 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] coprocessor.CoprocessorHost(200): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2018-12-04 20:49:02,687 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(8154): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2018-12-04 20:49:02,697 INFO [RS_OPEN_META-regionserver/asf910:0-0] regionserver.RegionCoprocessorHost(394): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2018-12-04 20:49:02,703 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table meta 1588230740
2018-12-04 20:49:02,704 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:02,717 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/info
2018-12-04 20:49:02,717 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/info
2018-12-04 20:49:02,718 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(237): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:02,719 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:02,722 INFO [StoreOpener-1588230740-1] regionserver.HStore(332): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:02,725 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/rep_barrier
2018-12-04 20:49:02,726 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/rep_barrier
2018-12-04 20:49:02,726 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(237): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:02,727 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:02,728 INFO [StoreOpener-1588230740-1] regionserver.HStore(332): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:02,732 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/table
2018-12-04 20:49:02,732 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/table
2018-12-04 20:49:02,733 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(237): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:02,733 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:02,734 INFO [StoreOpener-1588230740-1] regionserver.HStore(332): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:02,737 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740
2018-12-04 20:49:02,742 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/meta/1588230740
2018-12-04 20:49:02,745 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7M)) instead.
2018-12-04 20:49:02,746 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 1588230740
2018-12-04 20:49:02,750 INFO [RS_OPEN_META-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 1588230740; next sequenceid=2
2018-12-04 20:49:02,789 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2177): Post open deploy tasks for hbase:meta,,1.1588230740
2018-12-04 20:49:02,809 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:02,810 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add MetaQueue(hbase:meta, xlock=false sharedLock=0 size=1) to run queue because: pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:meta, region=1588230740 has lock
2018-12-04 20:49:02,810 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove MetaQueue(hbase:meta, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:meta, region=1588230740
2018-12-04 20:49:02,812 DEBUG [PEWorker-14] assignment.RegionTransitionProcedure(387): Finishing pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:02,813 INFO [PEWorker-14] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:02,814 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2201): Finished post open deploy task for hbase:meta,,1.1588230740
2018-12-04 20:49:02,817 DEBUG [RS_OPEN_META-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:02,839 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:02,906 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2018-12-04 20:49:02,906 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=2, ppid=1, state=SUCCESS, locked=true; AssignProcedure table=hbase:meta, region=1588230740 as the 4th rollback step
2018-12-04 20:49:02,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=0, pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:meta, region=1588230740; rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,013 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,013 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,118 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,118 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,224 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,224 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,330 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,330 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,343 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 433ms (threshold=300ms), isSync:true, flushTotalNanos=9490ns
2018-12-04 20:49:03,343 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 433ms (threshold=300ms), isSync:true, flushTotalNanos=8457ns
2018-12-04 20:49:03,359 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 448ms (threshold=300ms), isSync:true, flushTotalNanos=8178ns
2018-12-04 20:49:03,439 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,439 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,545 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,545 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,649 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,649 WARN [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,716 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 350ms (threshold=300ms), isSync:true, flushTotalNanos=7224ns
2018-12-04 20:49:03,718 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 352ms (threshold=300ms), isSync:true, flushTotalNanos=6882ns
2018-12-04 20:49:03,725 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 359ms (threshold=300ms), isSync:true, flushTotalNanos=11738ns
2018-12-04 20:49:03,728 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:meta, xlock=false sharedLock=0 size=0) to run queue because: pid=2, ppid=1, state=SUCCESS; AssignProcedure table=hbase:meta, region=1588230740 released the shared lock
2018-12-04 20:49:03,753 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,753 WARN [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,856 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,856 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,887 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(356): Add MetaQueue(hbase:meta, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=1, state=RUNNABLE; InitMetaProcedure table=hbase:meta
2018-12-04 20:49:03,887 INFO [PEWorker-14] procedure2.ProcedureExecutor(1897): Finished subprocedure pid=2, resume processing parent pid=1, state=RUNNABLE; InitMetaProcedure table=hbase:meta
2018-12-04 20:49:03,887 INFO [PEWorker-14] procedure2.ProcedureExecutor(1485): Finished pid=2, ppid=1, state=SUCCESS; AssignProcedure table=hbase:meta, region=1588230740 in 1.5390sec, unfinishedSiblingCount=0
2018-12-04 20:49:03,888 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove MetaQueue(hbase:meta, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=1, state=RUNNABLE; InitMetaProcedure table=hbase:meta
2018-12-04 20:49:03,888 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:meta, xlock=true (1) sharedLock=0 size=0) from run queue because: pid=1, state=RUNNABLE; InitMetaProcedure table=hbase:meta held the exclusive lock
2018-12-04 20:49:03,959 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:03,959 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:03,986 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=1, state=SUCCESS, locked=true; InitMetaProcedure table=hbase:meta as the 5th rollback step
2018-12-04 20:49:04,062 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:04,062 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:04,165 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:04,165 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:04,243 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:meta, xlock=false sharedLock=0 size=0) to run queue because: pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta released the exclusive lock
2018-12-04 20:49:04,243 INFO [PEWorker-14] procedure2.ProcedureExecutor(1485): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 3.1070sec
2018-12-04 20:49:04,244 INFO [master/asf910:0:becomeActiveMaster] master.HMaster(981): Master startup: status=Wait for region servers to report in, state=RUNNING, startTime=1543956539437, completionTime=-1
2018-12-04 20:49:04,244 INFO [master/asf910:0:becomeActiveMaster] master.ServerManager(839): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running
2018-12-04 20:49:04,244 DEBUG [master/asf910:0:becomeActiveMaster] assignment.AssignmentManager(1201): Joining cluster...
2018-12-04 20:49:04,268 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:04,268 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:04,325 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:51638, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:49:04,371 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(993): META REPORTED: rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203, table=hbase:meta, region=1588230740
2018-12-04 20:49:04,371 WARN [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.AssignmentManager(995): META REPORTED but no procedure found (complete?); set location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:04,385 INFO [master/asf910:0:becomeActiveMaster] assignment.AssignmentManager(1213): Number of RegionServers=3
2018-12-04 20:49:04,386 INFO [master/asf910:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1543956604385
2018-12-04 20:49:04,386 INFO [master/asf910:0:becomeActiveMaster] assignment.AssignmentManager(1219): Joined the cluster in 141msec
2018-12-04 20:49:04,544 INFO [master/asf910:0:becomeActiveMaster] master.TableNamespaceManager(96): Namespace
table not found. Creating... 2018-12-04 20:49:04,551 INFO [master/asf910:0:becomeActiveMaster] master.HMaster(2022): Client=null/null create 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} 2018-12-04 20:49:04,738 DEBUG [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(1092): Stored pid=3, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2018-12-04 20:49:04,738 DEBUG [master/asf910:0:becomeActiveMaster] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=3, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2018-12-04 20:49:04,740 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=3, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2018-12-04 20:49:04,742 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=true (3) sharedLock=0 size=0) from run queue because: pid=3, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace held the exclusive lock 2018-12-04 20:49:04,859 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=3, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace as the 0th rollback 
step 2018-12-04 20:49:04,923 DEBUG [PEWorker-15] procedure.DeleteTableProcedure(313): Archiving region hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. from FS 2018-12-04 20:49:04,926 DEBUG [PEWorker-15] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960 2018-12-04 20:49:04,928 DEBUG [PEWorker-15] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b empty. 2018-12-04 20:49:04,929 DEBUG [PEWorker-15] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b 2018-12-04 20:49:04,930 DEBUG [PEWorker-15] procedure.DeleteTableProcedure(317): Table 'hbase:namespace' archived! 2018-12-04 20:49:05,008 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741834_1010{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW]]} size 476 2018-12-04 20:49:05,009 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741834_1010 size 476 2018-12-04 20:49:05,009 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741834_1010 size 476 2018-12-04 20:49:05,415 DEBUG [PEWorker-15] util.FSTableDescriptors(684): Wrote into 
hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2018-12-04 20:49:05,420 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(7003): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == hbase:namespace 2018-12-04 20:49:05,567 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0 2018-12-04 20:49:05,568 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0 2018-12-04 20:49:05,568 INFO [Block report 
processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|FINALIZED], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0 2018-12-04 20:49:05,572 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(833): Instantiated hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:05,574 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1541): Closing 9ec9c1da4947b53085aaed5a2a3da06b, disabling compactions & flushes 2018-12-04 20:49:05,574 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1581): Updates disabled for region hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. 2018-12-04 20:49:05,574 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1698): Closed hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. 
2018-12-04 20:49:05,580 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=3, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace as the 1th rollback step 2018-12-04 20:49:05,742 DEBUG [PEWorker-15] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":1543956545680},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956545680}]},"ts":1543956545680} 2018-12-04 20:49:05,793 INFO [PEWorker-15] hbase.MetaTableAccessor(1528): Added 1 regions to meta. 2018-12-04 20:49:05,794 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=3, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace as the 2th rollback step 2018-12-04 20:49:05,892 DEBUG [PEWorker-15] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956545885}]},"ts":1543956545885} 2018-12-04 20:49:05,900 INFO [PEWorker-15] hbase.MetaTableAccessor(1673): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2018-12-04 20:49:06,006 INFO [PEWorker-15] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068}] 2018-12-04 20:49:06,007 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=3, state=WAITING:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace as the 3th rollback step 2018-12-04 20:49:06,171 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=true (3) sharedLock=0 size=1) to run queue because: pid=4, ppid=3, 
state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access 2018-12-04 20:49:06,171 DEBUG [PEWorker-16] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=true (3) sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,173 INFO [PEWorker-16] procedure.MasterProcedureScheduler(741): Took xlock for pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,388 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=1 size=0) to run queue because: pid=3, state=WAITING:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace released the exclusive lock 2018-12-04 20:49:06,388 INFO [PEWorker-16] assignment.AssignProcedure(254): Starting pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=false 2018-12-04 20:49:06,389 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 4th rollback step 2018-12-04 20:49:06,541 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 1 regions. 
1 retained the pre-restart assignment. 2018-12-04 20:49:06,543 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=1 size=1) to run queue because: pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:06,543 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=false sharedLock=1 size=0) from run queue because: queue is empty after polling out pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,593 INFO [PEWorker-8] assignment.RegionStateStore(200): pid=4 updating hbase:meta row=9ec9c1da4947b53085aaed5a2a3da06b, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,600 INFO [PEWorker-8] assignment.RegionTransitionProcedure(267): Dispatch pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,600 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 5th rollback step 2018-12-04 20:49:06,752 DEBUG [RSProcedureDispatcher-pool3-t3] master.ServerManager(728): New admin connection to asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,752 DEBUG [RSProcedureDispatcher-pool3-t4] master.ServerManager(728): New admin connection to asf910.gq1.ygridcore.net,34504,1543956539068 
2018-12-04 20:49:06,762 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:53664, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2018-12-04 20:49:06,763 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1987): Open hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. 2018-12-04 20:49:06,773 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 9ec9c1da4947b53085aaed5a2a3da06b, NAME => 'hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.', STARTKEY => '', ENDKEY => ''} 2018-12-04 20:49:06,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table namespace 9ec9c1da4947b53085aaed5a2a3da06b 2018-12-04 20:49:06,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:06,781 DEBUG [StoreOpener-9ec9c1da4947b53085aaed5a2a3da06b-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/info 2018-12-04 20:49:06,781 DEBUG [StoreOpener-9ec9c1da4947b53085aaed5a2a3da06b-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/info 2018-12-04 20:49:06,783 INFO [StoreOpener-9ec9c1da4947b53085aaed5a2a3da06b-1] hfile.CacheConfig(237): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 
KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:49:06,784 INFO [StoreOpener-9ec9c1da4947b53085aaed5a2a3da06b-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:49:06,785 INFO [StoreOpener-9ec9c1da4947b53085aaed5a2a3da06b-1] regionserver.HStore(332): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:06,789 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b 2018-12-04 20:49:06,791 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b 2018-12-04 20:49:06,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 9ec9c1da4947b53085aaed5a2a3da06b 2018-12-04 20:49:06,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] 
wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-12-04 20:49:06,805 INFO [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 9ec9c1da4947b53085aaed5a2a3da06b; next sequenceid=2 2018-12-04 20:49:06,811 INFO [PostOpenDeployTasks:9ec9c1da4947b53085aaed5a2a3da06b] regionserver.HRegionServer(2177): Post open deploy tasks for hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. 2018-12-04 20:49:06,816 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,816 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=1 size=1) to run queue because: pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:06,816 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=false sharedLock=1 size=0) from run queue because: queue is empty after polling out pid=4, ppid=3, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,817 DEBUG [PEWorker-2] assignment.RegionTransitionProcedure(387): Finishing pid=4, ppid=3, 
state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,817 DEBUG [PostOpenDeployTasks:9ec9c1da4947b53085aaed5a2a3da06b] regionserver.HRegionServer(2201): Finished post open deploy task for hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. 2018-12-04 20:49:06,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. on asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,820 INFO [PEWorker-2] assignment.RegionStateStore(200): pid=4 updating hbase:meta row=9ec9c1da4947b53085aaed5a2a3da06b, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:06,827 DEBUG [PEWorker-2] procedure2.RootProcedureState(153): Add procedure pid=4, ppid=3, state=SUCCESS, locked=true; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 6th rollback step 2018-12-04 20:49:07,126 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) to run queue because: pid=4, ppid=3, state=SUCCESS; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 released the shared lock 2018-12-04 20:49:07,208 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=3, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace 2018-12-04 20:49:07,208 INFO [PEWorker-2] procedure2.ProcedureExecutor(1897): Finished 
subprocedure pid=4, resume processing parent pid=3, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace 2018-12-04 20:49:07,209 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=3, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace 2018-12-04 20:49:07,209 INFO [PEWorker-2] procedure2.ProcedureExecutor(1485): Finished pid=4, ppid=3, state=SUCCESS; AssignProcedure table=hbase:namespace, region=9ec9c1da4947b53085aaed5a2a3da06b, target=asf910.gq1.ygridcore.net,34504,1543956539068 in 821msec, unfinishedSiblingCount=0 2018-12-04 20:49:07,209 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=true (3) sharedLock=0 size=0) from run queue because: pid=3, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=hbase:namespace held the exclusive lock 2018-12-04 20:49:07,286 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956547286}]},"ts":1543956547286} 2018-12-04 20:49:07,292 INFO [PEWorker-3] hbase.MetaTableAccessor(1673): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2018-12-04 20:49:07,307 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=3, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace as the 7th rollback step 2018-12-04 20:49:07,351 DEBUG [master/asf910:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2018-12-04 20:49:07,352 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(135): Registering adapter for the MetricRegistry: 
RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2018-12-04 20:49:07,353 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(139): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2018-12-04 20:49:07,365 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2018-12-04 20:49:07,375 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=3, state=SUCCESS, locked=true; CreateTableProcedure table=hbase:namespace as the 8th rollback step 2018-12-04 20:49:07,408 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:53688, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-12-04 20:49:07,601 DEBUG [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(1092): Stored pid=5, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2018-12-04 20:49:07,703 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-12-04 20:49:07,704 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2018-12-04 20:49:07,766 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=1) to run queue because: pid=3, state=SUCCESS; CreateTableProcedure table=hbase:namespace released the exclusive lock 2018-12-04 20:49:07,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1485): Finished pid=3, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 2.8220sec 2018-12-04 20:49:07,769 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove 
TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=5, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2018-12-04 20:49:07,854 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=5, state=RUNNABLE:CREATE_NAMESPACE_CREATE_DIRECTORY, locked=true; CreateNamespaceProcedure, namespace=default as the 0th rollback step 2018-12-04 20:49:08,011 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=5, state=RUNNABLE:CREATE_NAMESPACE_INSERT_INTO_NS_TABLE, locked=true; CreateNamespaceProcedure, namespace=default as the 1th rollback step 2018-12-04 20:49:08,162 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=5, state=RUNNABLE:CREATE_NAMESPACE_UPDATE_ZK, locked=true; CreateNamespaceProcedure, namespace=default as the 2th rollback step 2018-12-04 20:49:08,233 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2018-12-04 20:49:08,323 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2018-12-04 20:49:08,349 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=5, state=RUNNABLE:CREATE_NAMESPACE_SET_NAMESPACE_QUOTA, locked=true; CreateNamespaceProcedure, namespace=default as the 3th rollback step 2018-12-04 20:49:08,664 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 310ms (threshold=300ms), isSync:true, flushTotalNanos=5882ns 2018-12-04 20:49:08,666 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=5, state=SUCCESS, locked=true; 
CreateNamespaceProcedure, namespace=default as the 4th rollback step
2018-12-04 20:49:09,407 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 740ms (threshold=300ms), isSync:true, flushTotalNanos=12668ns
2018-12-04 20:49:09,416 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 748ms (threshold=300ms), isSync:true, flushTotalNanos=8456ns
2018-12-04 20:49:09,430 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 763ms (threshold=300ms), isSync:true, flushTotalNanos=9878ns
2018-12-04 20:49:09,830 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 398ms (threshold=300ms), isSync:true, flushTotalNanos=9770ns
2018-12-04 20:49:09,857 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 425ms (threshold=300ms), isSync:true, flushTotalNanos=7632ns
2018-12-04 20:49:09,867 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 434ms (threshold=300ms), isSync:true, flushTotalNanos=13072ns
2018-12-04 20:49:09,868 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) to run queue because: pid=5, state=SUCCESS; CreateNamespaceProcedure, namespace=default released namespace exclusive lock
2018-12-04 20:49:09,869 INFO [PEWorker-3] procedure2.ProcedureExecutor(1485): Finished pid=5, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 1.2530sec
2018-12-04 20:49:10,125 DEBUG [master/asf910:0:becomeActiveMaster] procedure2.ProcedureExecutor(1092): Stored pid=6, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2018-12-04 20:49:10,125 DEBUG [master/asf910:0:becomeActiveMaster] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=6, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2018-12-04 20:49:10,127 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(366): Remove TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=6, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2018-12-04 20:49:10,285 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=6, state=RUNNABLE:CREATE_NAMESPACE_CREATE_DIRECTORY, locked=true; CreateNamespaceProcedure, namespace=hbase as the 0th rollback step
2018-12-04 20:49:10,490 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=6, state=RUNNABLE:CREATE_NAMESPACE_INSERT_INTO_NS_TABLE, locked=true; CreateNamespaceProcedure, namespace=hbase as the 1th rollback step
2018-12-04 20:49:10,583 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=6, state=RUNNABLE:CREATE_NAMESPACE_UPDATE_ZK, locked=true; CreateNamespaceProcedure, namespace=hbase as the 2th rollback step
2018-12-04 20:49:10,774 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2018-12-04 20:49:10,808 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=6, state=RUNNABLE:CREATE_NAMESPACE_SET_NAMESPACE_QUOTA, locked=true; CreateNamespaceProcedure, namespace=hbase as the 3th rollback step
2018-12-04 20:49:10,942 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=6, state=SUCCESS, locked=true; CreateNamespaceProcedure, namespace=hbase as the 4th rollback step
2018-12-04 20:49:11,267 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(356): Add TableQueue(hbase:namespace, xlock=false sharedLock=0 size=0) to run queue because: pid=6, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase released namespace exclusive lock
2018-12-04 20:49:11,268 INFO [PEWorker-4] procedure2.ProcedureExecutor(1485): Finished pid=6, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 1.0670sec
2018-12-04 20:49:11,339 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2018-12-04 20:49:11,365 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2018-12-04 20:49:11,366 INFO [master/asf910:0:becomeActiveMaster] master.HMaster(1049): Master has completed initialization 11.855sec
2018-12-04 20:49:11,373 INFO [master/asf910:0:becomeActiveMaster] quotas.MasterQuotaManager(80): Quota support disabled
2018-12-04 20:49:11,373 INFO [master/asf910:0:becomeActiveMaster] zookeeper.ZKWatcher(205): not a secure deployment, proceeding
2018-12-04 20:49:11,383 INFO [master/asf910:0:becomeActiveMaster] balancer.RegionLocationFinder(308): Refreshing block distribution cache for 2 regions (Can take a while on big cluster)
2018-12-04 20:49:11,398 INFO [master/asf910:0:becomeActiveMaster] balancer.RegionLocationFinder(324): Finished refreshing block distribution cache for 2 regions
2018-12-04 20:49:11,398 DEBUG [master/asf910:0:becomeActiveMaster] master.HMaster(1113): Balancer post startup initialization complete, took 0 seconds
2018-12-04 20:49:11,443 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x48feb89f to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:11,482 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12eae18c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:11,527 INFO [RS-EventLoopGroup-4-4] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:51830, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:49:11,544 INFO [Time-limited test] hbase.HBaseTestingUtility(1052): Minicluster is up; activeMaster=asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:49:11,634 INFO [Time-limited test] hbase.ResourceChecker(148): before: client.TestRestoreSnapshotFromClientAfterSplittingRegions#testRestoreSnapshotAfterSplittingRegions[0: regionReplication=1] Thread=392, OpenFileDescriptor=1593, MaxFileDescriptor=60000, SystemLoadAverage=1008, ProcessCount=302, AvailableMemoryMB=13040
2018-12-04 20:49:11,635 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=1593 is superior to 1024
2018-12-04 20:49:11,663 INFO [RS-EventLoopGroup-1-5] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:51823, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2018-12-04 20:49:11,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.HMaster$4(1986): Client=jenkins//67.195.81.154 create 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-12-04 20:49:11,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:11,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:11,903 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:11,905 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=0) from run queue because: pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:12,045 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 0th rollback step
2018-12-04 20:49:12,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.MasterRpcServices(631): Client=jenkins//67.195.81.154 procedure request for creating table: namespace: "default" qualifier: "testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635" procId is: 7
2018-12-04 20:49:12,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=7
2018-12-04 20:49:12,241 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(313): Archiving region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. from FS
2018-12-04 20:49:12,242 DEBUG [PEWorker-7] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960
2018-12-04 20:49:12,243 DEBUG [PEWorker-7] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f empty.
2018-12-04 20:49:12,246 DEBUG [PEWorker-7] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:12,246 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(313): Archiving region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. from FS
2018-12-04 20:49:12,246 DEBUG [PEWorker-7] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960
2018-12-04 20:49:12,248 DEBUG [PEWorker-7] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d empty.
2018-12-04 20:49:12,250 DEBUG [PEWorker-7] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d
2018-12-04 20:49:12,250 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(313): Archiving region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. from FS
2018-12-04 20:49:12,250 DEBUG [PEWorker-7] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960
2018-12-04 20:49:12,252 DEBUG [PEWorker-7] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb empty.
2018-12-04 20:49:12,254 DEBUG [PEWorker-7] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:12,254 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(313): Archiving region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. from FS
2018-12-04 20:49:12,254 DEBUG [PEWorker-7] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960
2018-12-04 20:49:12,256 DEBUG [PEWorker-7] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde empty.
2018-12-04 20:49:12,257 DEBUG [PEWorker-7] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:12,257 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(313): Archiving region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. from FS
2018-12-04 20:49:12,257 DEBUG [PEWorker-7] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960
2018-12-04 20:49:12,259 DEBUG [PEWorker-7] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90 empty.
2018-12-04 20:49:12,264 DEBUG [PEWorker-7] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:12,264 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(313): Archiving region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. from FS
2018-12-04 20:49:12,264 DEBUG [PEWorker-7] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960
2018-12-04 20:49:12,266 DEBUG [PEWorker-7] backup.HFileArchiver(146): Directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324 empty.
2018-12-04 20:49:12,267 DEBUG [PEWorker-7] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:12,267 DEBUG [PEWorker-7] procedure.DeleteTableProcedure(317): Table 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635' archived!
2018-12-04 20:49:12,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=7
2018-12-04 20:49:12,336 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:12,336 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:12,337 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:49:12,343 DEBUG [PEWorker-7] util.FSTableDescriptors(684): Wrote into hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/.tabledesc/.tableinfo.0000000001
2018-12-04 20:49:12,347 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-2] regionserver.HRegion(7003): creating HRegion testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 HTD == 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:12,348 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-6] regionserver.HRegion(7003): creating HRegion testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 HTD == 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:12,348 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-3] regionserver.HRegion(7003): creating HRegion testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 HTD == 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:12,353 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-4] regionserver.HRegion(7003): creating HRegion testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 HTD == 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:12,356 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-5] regionserver.HRegion(7003): creating HRegion testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 HTD == 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:12,357 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-1] regionserver.HRegion(7003): creating HRegion testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 HTD == 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.tmp Table name == testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:12,502 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:49:12,506 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:49:12,514 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,514 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,514 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741839_1015 size 114
2018-12-04 20:49:12,515 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-6] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:12,517 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-6] regionserver.HRegion(1541): Closing 3694f6258e9e47dea826bcb208d58324, disabling compactions & flushes
2018-12-04 20:49:12,515 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,518 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-6] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:12,519 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-6] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:12,522 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,523 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,524 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 114
2018-12-04 20:49:12,524 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741841_1017 size 115
2018-12-04 20:49:12,524 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,524 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741838_1014 size 114
2018-12-04 20:49:12,525 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741838_1014 size 114
2018-12-04 20:49:12,525 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:12,525 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-5] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:12,526 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-5] regionserver.HRegion(1541): Closing 0cbbdc66f0b53e014d4b09cb9f965d90, disabling compactions & flushes
2018-12-04 20:49:12,526 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-5] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:12,526 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:12,527 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-5] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:12,529 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-3] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:12,529 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-2] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:12,529 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741842_1018 size 115
2018-12-04 20:49:12,531 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-2] regionserver.HRegion(1541): Closing 17bf706db6019b3980612acaaf29410d, disabling compactions & flushes
2018-12-04 20:49:12,531 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-2] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.
2018-12-04 20:49:12,531 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-4] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:12,530 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-3] regionserver.HRegion(1541): Closing eea7db479f05d0bfd00980b44810efbb, disabling compactions & flushes 2018-12-04 20:49:12,531 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-4] regionserver.HRegion(1541): Closing f54fb87a834cb50fd2027cf50bec8dde, disabling compactions & flushes 2018-12-04 20:49:12,531 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-2] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:12,531 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741837_1013 size 115 2018-12-04 20:49:12,532 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-4] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 
2018-12-04 20:49:12,531 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-3] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:49:12,532 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-4] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:12,532 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741842_1018 size 115 2018-12-04 20:49:12,532 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-3] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 
2018-12-04 20:49:12,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=7 2018-12-04 20:49:12,926 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:12,926 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-1] regionserver.HRegion(1541): Closing 5abac36fc00b7260425322877c1d024f, disabling compactions & flushes 2018-12-04 20:49:12,926 DEBUG [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 2018-12-04 20:49:12,927 INFO [RegionOpenAndInitThread-testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635-1] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 
2018-12-04 20:49:12,938 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 1th rollback step 2018-12-04 20:49:13,092 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.","families":{"info":[{"qualifier":"regioninfo","vlen":113,"tag":[],"timestamp":1543956553090},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956553090}]},"ts":1543956553090} 2018-12-04 20:49:13,093 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.","families":{"info":[{"qualifier":"regioninfo","vlen":114,"tag":[],"timestamp":1543956553090},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956553090}]},"ts":1543956553090} 2018-12-04 20:49:13,093 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.","families":{"info":[{"qualifier":"regioninfo","vlen":114,"tag":[],"timestamp":1543956553090},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956553090}]},"ts":1543956553090} 2018-12-04 20:49:13,093 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.","families":{"info":[{"qualifier":"regioninfo","vlen":114,"tag":[],"timestamp":1543956553090},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956553090}]},"ts":1543956553090} 2018-12-04 20:49:13,094 DEBUG [PEWorker-7] 
hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.","families":{"info":[{"qualifier":"regioninfo","vlen":114,"tag":[],"timestamp":1543956553090},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956553090}]},"ts":1543956553090} 2018-12-04 20:49:13,094 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":2,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.","families":{"info":[{"qualifier":"regioninfo","vlen":113,"tag":[],"timestamp":1543956553090},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1543956553090}]},"ts":1543956553090} 2018-12-04 20:49:13,161 INFO [PEWorker-7] hbase.MetaTableAccessor(1528): Added 6 regions to meta. 2018-12-04 20:49:13,161 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 2th rollback step 2018-12-04 20:49:13,317 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956553316}]},"ts":1543956553316} 2018-12-04 20:49:13,322 INFO [PEWorker-7] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=ENABLING in hbase:meta 2018-12-04 20:49:13,426 INFO [PEWorker-7] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, 
target=asf910.gq1.ygridcore.net,51486,1543956539203}, {pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203}, {pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302}, {pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068}, {pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302}, {pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068}] 2018-12-04 20:49:13,426 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=7, state=WAITING:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 3th rollback step 2018-12-04 20:49:13,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=7 2018-12-04 20:49:13,597 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=1) to run 
queue because: pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 has the excusive lock access 2018-12-04 20:49:13,597 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=2) to run queue because: pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 has the excusive lock access 2018-12-04 20:49:13,598 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=3) to run queue because: pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 has the excusive lock access 2018-12-04 20:49:13,598 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=4) to run queue because: pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access 2018-12-04 20:49:13,598 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add 
TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=5) to run queue because: pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 has the excusive lock access 2018-12-04 20:49:13,599 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=6) to run queue because: pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access 2018-12-04 20:49:13,601 INFO [PEWorker-6] procedure.MasterProcedureScheduler(741): Took xlock for pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:13,604 INFO [PEWorker-9] procedure.MasterProcedureScheduler(741): Took xlock for pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:13,605 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=2 size=0) from run queue because: queue is empty after polling out pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:13,605 INFO [PEWorker-12] procedure.MasterProcedureScheduler(741): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:13,606 INFO [PEWorker-10] procedure.MasterProcedureScheduler(741): Took xlock for pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:13,607 INFO [PEWorker-11] procedure.MasterProcedureScheduler(741): Took xlock for pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:13,607 INFO [PEWorker-1] procedure.MasterProcedureScheduler(741): Took xlock for pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:13,870 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) to run queue because: pid=7, state=WAITING:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock 
2018-12-04 20:49:13,870 INFO [PEWorker-9] assignment.AssignProcedure(254): Starting pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203; forceNewPlan=false, retain=false 2018-12-04 20:49:13,870 INFO [PEWorker-6] assignment.AssignProcedure(254): Starting pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=false 2018-12-04 20:49:13,870 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 as the 4th rollback step 2018-12-04 20:49:13,870 DEBUG [PEWorker-6] procedure2.RootProcedureState(153): Add procedure pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 5th rollback step 2018-12-04 20:49:14,021 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2018-12-04 20:49:14,022 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:14,023 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 has lock 2018-12-04 20:49:14,024 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:14,067 INFO [PEWorker-10] assignment.AssignProcedure(254): Starting pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OFFLINE, location=asf910.gq1.ygridcore.net,36011,1543956539302; 
forceNewPlan=false, retain=false 2018-12-04 20:49:14,067 INFO [PEWorker-12] assignment.AssignProcedure(254): Starting pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203; forceNewPlan=false, retain=false 2018-12-04 20:49:14,067 INFO [PEWorker-11] assignment.AssignProcedure(254): Starting pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=false 2018-12-04 20:49:14,069 INFO [PEWorker-13] assignment.RegionStateStore(200): pid=9 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:14,069 INFO [PEWorker-5] assignment.RegionStateStore(200): pid=13 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:14,069 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 as the 6th rollback step 2018-12-04 20:49:14,070 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 7th rollback step 2018-12-04 20:49:14,070 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 as the 8th rollback step 2018-12-04 20:49:14,077 INFO [PEWorker-13] assignment.RegionTransitionProcedure(267): Dispatch pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:14,077 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 as the 9th rollback step 2018-12-04 20:49:14,113 INFO [PEWorker-5] assignment.RegionTransitionProcedure(267): Dispatch pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:14,113 DEBUG [PEWorker-5] procedure2.RootProcedureState(153): Add procedure pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, 
region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 10th rollback step 2018-12-04 20:49:14,218 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 3 regions. 3 retained the pre-restart assignment. 2018-12-04 20:49:14,220 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:14,221 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 has lock 2018-12-04 20:49:14,222 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=3) to run queue because: pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 has lock 2018-12-04 20:49:14,223 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false 
sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:14,229 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:14,245 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 17bf706db6019b3980612acaaf29410d, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.', STARTKEY => '1', ENDKEY => '2'} 2018-12-04 20:49:14,247 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:14,247 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:14,253 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf 2018-12-04 20:49:14,253 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] 
util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf 2018-12-04 20:49:14,260 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:49:14,261 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:49:14,262 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:14,265 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under 
hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:14,266 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 2018-12-04 20:49:14,267 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:14,271 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:14,277 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-12-04 20:49:14,277 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 17bf706db6019b3980612acaaf29410d; next sequenceid=2 2018-12-04 20:49:14,282 INFO [PostOpenDeployTasks:17bf706db6019b3980612acaaf29410d] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 
2018-12-04 20:49:14,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:14,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 has lock 2018-12-04 20:49:14,285 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:14,285 DEBUG [PostOpenDeployTasks:17bf706db6019b3980612acaaf29410d] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 
2018-12-04 20:49:14,287 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. on asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,297 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 3694f6258e9e47dea826bcb208d58324, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.', STARTKEY => '5', ENDKEY => ''}
2018-12-04 20:49:14,298 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:14,298 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:14,305 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf
2018-12-04 20:49:14,305 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf
2018-12-04 20:49:14,307 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:14,308 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:14,311 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:14,315 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:14,316 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:14,320 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:14,325 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-12-04 20:49:14,325 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 3694f6258e9e47dea826bcb208d58324; next sequenceid=2
2018-12-04 20:49:14,328 INFO [PEWorker-1] assignment.AssignProcedure(254): Starting pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OFFLINE, location=asf910.gq1.ygridcore.net,36011,1543956539302; forceNewPlan=false, retain=false
2018-12-04 20:49:14,329 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 as the 11th rollback step
2018-12-04 20:49:14,329 DEBUG [PEWorker-2] assignment.RegionTransitionProcedure(387): Finishing pid=9, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,329 INFO [PEWorker-8] assignment.RegionStateStore(200): pid=11 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,329 INFO [PEWorker-16] assignment.RegionStateStore(200): pid=12 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,329 INFO [PEWorker-15] assignment.RegionStateStore(200): pid=8 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,330 INFO [PEWorker-2] assignment.RegionStateStore(200): pid=9 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,359 INFO [PostOpenDeployTasks:3694f6258e9e47dea826bcb208d58324] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:14,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,362 INFO [PEWorker-8] assignment.RegionTransitionProcedure(267): Dispatch pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,362 DEBUG [PEWorker-2] procedure2.RootProcedureState(153): Add procedure pid=9, ppid=7, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 as the 12th rollback step
2018-12-04 20:49:14,362 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 13th rollback step
2018-12-04 20:49:14,362 INFO [PEWorker-16] assignment.RegionTransitionProcedure(267): Dispatch pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,363 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 as the 14th rollback step
2018-12-04 20:49:14,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 has lock
2018-12-04 20:49:14,363 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,364 DEBUG [PEWorker-3] assignment.RegionTransitionProcedure(387): Finishing pid=13, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,364 DEBUG [PostOpenDeployTasks:3694f6258e9e47dea826bcb208d58324] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:14,363 INFO [PEWorker-15] assignment.RegionTransitionProcedure(267): Dispatch pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,365 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. on asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,365 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 as the 15th rollback step
2018-12-04 20:49:14,365 INFO [PEWorker-3] assignment.RegionStateStore(200): pid=13 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,370 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=13, ppid=7, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 16th rollback step
2018-12-04 20:49:14,479 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 1 regions. 1 retained the pre-restart assignment.
2018-12-04 20:49:14,481 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 has lock
2018-12-04 20:49:14,481 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,481 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=10 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,485 INFO [PEWorker-4] assignment.RegionTransitionProcedure(267): Dispatch pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,486 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 as the 17th rollback step
2018-12-04 20:49:14,514 DEBUG [RSProcedureDispatcher-pool3-t10] master.ServerManager(728): New admin connection to asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,514 DEBUG [RSProcedureDispatcher-pool3-t12] master.ServerManager(728): New admin connection to asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,516 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=34504] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:14,531 INFO [RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=51486] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:14,531 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:59780, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2018-12-04 20:49:14,532 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:14,541 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(7177): Opening region: {ENCODED => 5abac36fc00b7260425322877c1d024f, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.', STARTKEY => '', ENDKEY => '1'}
2018-12-04 20:49:14,547 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:14,547 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:14,549 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 0cbbdc66f0b53e014d4b09cb9f965d90, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.', STARTKEY => '4', ENDKEY => '5'}
2018-12-04 20:49:14,549 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:14,549 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(7177): Opening region: {ENCODED => f54fb87a834cb50fd2027cf50bec8dde, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.', STARTKEY => '3', ENDKEY => '4'}
2018-12-04 20:49:14,550 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:14,550 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:14,550 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:14,551 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:14,552 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(7177): Opening region: {ENCODED => eea7db479f05d0bfd00980b44810efbb, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.', STARTKEY => '2', ENDKEY => '3'}
2018-12-04 20:49:14,552 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:14,553 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:14,556 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf
2018-12-04 20:49:14,556 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf
2018-12-04 20:49:14,557 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:14,557 DEBUG [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf
2018-12-04 20:49:14,558 DEBUG [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf
2018-12-04 20:49:14,558 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:14,558 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf
2018-12-04 20:49:14,558 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf
2018-12-04 20:49:14,558 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:14,559 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:14,559 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:14,559 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:14,560 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf
2018-12-04 20:49:14,561 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf
2018-12-04 20:49:14,560 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:14,562 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:14,563 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:14,564 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:14,565 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:14,565 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:14,565 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:14,578 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:14,578 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:14,579 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:14,579 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:14,579 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:14,583 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:14,584 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(998): writing seq id for 5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:14,584 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:14,584 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(998): writing seq id for f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:14,584 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:14,591 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(998): writing seq id for eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:14,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1485): Finished pid=9, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, target=asf910.gq1.ygridcore.net,51486,1543956539203 in 937msec, unfinishedSiblingCount=5
2018-12-04 20:49:14,655 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-12-04 20:49:14,655 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-12-04 20:49:14,655 INFO [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(1002): Opened f54fb87a834cb50fd2027cf50bec8dde; next sequenceid=2
2018-12-04 20:49:14,655 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-12-04 20:49:14,655 INFO [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(1002): Opened 5abac36fc00b7260425322877c1d024f; next sequenceid=2
2018-12-04 20:49:14,656 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-12-04 20:49:14,656 INFO [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(1002): Opened eea7db479f05d0bfd00980b44810efbb; next sequenceid=2
2018-12-04 20:49:14,656 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 0cbbdc66f0b53e014d4b09cb9f965d90; next sequenceid=2
2018-12-04 20:49:14,661 INFO [PostOpenDeployTasks:5abac36fc00b7260425322877c1d024f] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:14,661 INFO [PostOpenDeployTasks:f54fb87a834cb50fd2027cf50bec8dde] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:14,662 INFO [PostOpenDeployTasks:eea7db479f05d0bfd00980b44810efbb] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:14,662 INFO [PostOpenDeployTasks:0cbbdc66f0b53e014d4b09cb9f965d90] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:14,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=1) to run queue because: pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 has lock
2018-12-04 20:49:14,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=2) to run queue because: pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 has lock
2018-12-04 20:49:14,670 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=0) from run queue because: queue is empty after polling out pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,670 DEBUG [PEWorker-7] assignment.RegionTransitionProcedure(387): Finishing pid=8, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,670 DEBUG [PEWorker-14] assignment.RegionTransitionProcedure(387): Finishing pid=11, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,671 INFO [PEWorker-14] assignment.RegionStateStore(200): pid=11 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,670 DEBUG [PostOpenDeployTasks:f54fb87a834cb50fd2027cf50bec8dde] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:14,672 DEBUG [PostOpenDeployTasks:5abac36fc00b7260425322877c1d024f] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:14,671 INFO [PEWorker-7] assignment.RegionStateStore(200): pid=8 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,676 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. on asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:14,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=1) to run queue because: pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 has lock
2018-12-04 20:49:14,677 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=0) from run queue because: queue is empty after polling out pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,677 DEBUG [PostOpenDeployTasks:eea7db479f05d0bfd00980b44810efbb] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:14,674 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. on asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:14,679 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. on asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=2, pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=1) to run queue because: pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 has lock
2018-12-04 20:49:14,681 DEBUG [PEWorker-6] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=0) from run queue because: queue is empty after polling out pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,681 DEBUG [PostOpenDeployTasks:0cbbdc66f0b53e014d4b09cb9f965d90] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:14,681 DEBUG [PEWorker-6] assignment.RegionTransitionProcedure(387): Finishing pid=12, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,682 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. on asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,683 INFO [PEWorker-6] assignment.RegionStateStore(200): pid=12 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,689 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=11, ppid=7, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 as the 18th rollback step
2018-12-04 20:49:14,689 DEBUG [PEWorker-6] procedure2.RootProcedureState(153): Add procedure pid=12, ppid=7, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 as the 19th rollback step
2018-12-04 20:49:14,691 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=8, ppid=7, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 as the 20th rollback step
2018-12-04 20:49:14,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=0, pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=7
2018-12-04 20:49:14,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=0, pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,911 INFO [PEWorker-3] procedure2.ProcedureExecutor(1485): Finished pid=13, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, target=asf910.gq1.ygridcore.net,34504,1543956539068 in 944msec, unfinishedSiblingCount=4
2018-12-04 20:49:14,911 DEBUG [PEWorker-9] assignment.RegionTransitionProcedure(387): Finishing pid=10, ppid=7, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,911 INFO [PEWorker-9] assignment.RegionStateStore(200): pid=10 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=OPEN, openSeqNum=2, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:14,917 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=10, ppid=7, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 as the 21th rollback step
2018-12-04 20:49:15,320 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 308ms (threshold=300ms), isSync:true, flushTotalNanos=9350ns
2018-12-04 20:49:15,321 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.BlockReceiver(422): Slow flushOrSync took 308ms (threshold=300ms), isSync:true, flushTotalNanos=9859ns
2018-12-04 20:49:15,322 INFO [PEWorker-14] procedure2.ProcedureExecutor(1485): Finished pid=11, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, target=asf910.gq1.ygridcore.net,34504,1543956539068 in 1.2640sec, unfinishedSiblingCount=3
2018-12-04 20:49:15,687 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=10, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 released the shared lock
2018-12-04 20:49:15,687 INFO [PEWorker-6] procedure2.ProcedureExecutor(1485): Finished pid=12, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, target=asf910.gq1.ygridcore.net,36011,1543956539302 in 1.2640sec, unfinishedSiblingCount=1
2018-12-04 20:49:15,687 INFO [PEWorker-7] procedure2.ProcedureExecutor(1485): Finished pid=8, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, target=asf910.gq1.ygridcore.net,51486,1543956539203 in 1.2660sec, unfinishedSiblingCount=1
2018-12-04 20:49:15,867 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:15,868 INFO [PEWorker-9] procedure2.ProcedureExecutor(1897): Finished subprocedure pid=10, resume processing parent pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:15,868 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:15,868 INFO [PEWorker-9] procedure2.ProcedureExecutor(1485): Finished pid=10, ppid=7, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, target=asf910.gq1.ygridcore.net,36011,1543956539302 in 1.4920sec, unfinishedSiblingCount=0
2018-12-04 20:49:15,868 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (7) sharedLock=0 size=0) from run queue because: pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:16,093 DEBUG [PEWorker-10] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956556093}]},"ts":1543956556093}
2018-12-04 20:49:16,099 INFO [PEWorker-10] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=ENABLED in hbase:meta
2018-12-04 20:49:16,148 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 22th rollback step
2018-12-04 20:49:16,334 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=7, state=SUCCESS, locked=true; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 23th rollback step
2018-12-04 20:49:16,712 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=7, state=SUCCESS; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock
2018-12-04 20:49:16,712 INFO [PEWorker-10] procedure2.ProcedureExecutor(1485): Finished pid=7, state=SUCCESS; CreateTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 in 4.6500sec
2018-12-04 20:49:17,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=7
2018-12-04 20:49:17,333 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3666): Operation: CREATE, Table Name: default:testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, procId: 7 completed
2018-12-04 20:49:17,334 DEBUG [Time-limited test] hbase.HBaseTestingUtility(3334): Waiting until all regions of table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 get assigned. Timeout = 60000ms
2018-12-04 20:49:17,336 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-12-04 20:49:17,356 INFO [Time-limited test] hbase.HBaseTestingUtility(3386): All regions for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 assigned to meta. Checking AM states.
2018-12-04 20:49:17,357 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-12-04 20:49:17,358 INFO [Time-limited test] hbase.HBaseTestingUtility(3406): All regions for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 assigned.
2018-12-04 20:49:17,374 INFO [Time-limited test] client.HBaseAdmin$15(919): Started disable of testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:17,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.HMaster$11(2524): Client=jenkins//67.195.81.154 disable testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:17,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=14, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:17,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=14, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:17,598 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=14, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:17,598 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=0) from run queue because: pid=14, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:17,699 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=14, state=RUNNABLE:DISABLE_TABLE_PRE_OPERATION, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 0th rollback step
2018-12-04 20:49:17,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=14
2018-12-04 20:49:17,795 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=14, state=RUNNABLE:DISABLE_TABLE_SET_DISABLING_TABLE_STATE, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 1th rollback step
2018-12-04 20:49:17,902 DEBUG [PEWorker-12] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956557902}]},"ts":1543956557902}
2018-12-04 20:49:17,906 INFO [PEWorker-12] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=DISABLING in hbase:meta
2018-12-04 20:49:17,948 INFO [PEWorker-12] procedure.DisableTableProcedure(295): Set testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 to state=DISABLING
2018-12-04 20:49:17,948 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=14, state=RUNNABLE:DISABLE_TABLE_MARK_REGIONS_OFFLINE, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 2th rollback step
2018-12-04 20:49:17,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=14
2018-12-04 20:49:18,055 INFO [PEWorker-12] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203}, {pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203}, {pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302}, {pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068}, {pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302}, {pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068}]
2018-12-04 20:49:18,055 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=14, state=WAITING:DISABLE_TABLE_ADD_REPLICATION_BARRIER, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 3th rollback step
2018-12-04 20:49:18,174 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=1) to run queue because: pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 has the excusive lock access
2018-12-04 20:49:18,175 DEBUG [PEWorker-13] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:18,175 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=1) to run queue because: pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 has the excusive lock access
2018-12-04 20:49:18,175 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=2) to run queue because: pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 has the excusive lock access
2018-12-04 20:49:18,175 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=3) to run queue because: pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access
2018-12-04 20:49:18,175 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=4) to run queue because: pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 has the excusive lock access
2018-12-04 20:49:18,176 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=5) to run queue because: pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access
2018-12-04 20:49:18,194 INFO [PEWorker-1] procedure.MasterProcedureScheduler(741): Took xlock for pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:18,198 INFO [PEWorker-16] procedure.MasterProcedureScheduler(741): Took xlock for pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:18,199 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=2 size=0) from run queue because: queue is empty after polling out pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:18,200 INFO [PEWorker-15] procedure.MasterProcedureScheduler(741): Took xlock for pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:18,200 INFO [PEWorker-13] procedure.MasterProcedureScheduler(741): Took xlock for pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:18,200 INFO [PEWorker-5] procedure.MasterProcedureScheduler(741): Took xlock for pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:18,201 INFO [PEWorker-8] procedure.MasterProcedureScheduler(741): Took xlock for pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:18,253 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) to run queue because: pid=14, state=WAITING:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock
2018-12-04 20:49:18,254 INFO [PEWorker-1] assignment.RegionStateStore(200): pid=19 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:18,257 INFO [PEWorker-1] assignment.RegionTransitionProcedure(267): Dispatch pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:18,258 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 4th rollback step
2018-12-04 20:49:18,415 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1609): Close 0cbbdc66f0b53e014d4b09cb9f965d90 without moving
2018-12-04
20:49:18,422 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 0cbbdc66f0b53e014d4b09cb9f965d90, disabling compactions & flushes 2018-12-04 20:49:18,422 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. 2018-12-04 20:49:18,429 INFO [PEWorker-16] assignment.RegionStateStore(200): pid=17 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,429 INFO [PEWorker-15] assignment.RegionStateStore(200): pid=16 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,433 INFO [PEWorker-15] assignment.RegionTransitionProcedure(267): Dispatch pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,433 INFO [PEWorker-16] assignment.RegionTransitionProcedure(267): Dispatch pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,434 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 5th rollback step 2018-12-04 
20:49:18,434 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-12-04 20:49:18,434 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 6th rollback step 2018-12-04 20:49:18,436 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. 2018-12-04 20:49:18,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, 
server=asf910.gq1.ygridcore.net,36011,1543956539302 has lock 2018-12-04 20:49:18,439 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,439 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. 2018-12-04 20:49:18,440 DEBUG [PEWorker-2] assignment.RegionTransitionProcedure(387): Finishing pid=19, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,440 INFO [PEWorker-2] assignment.RegionStateStore(200): pid=19 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=CLOSED 2018-12-04 20:49:18,444 DEBUG [PEWorker-2] procedure2.RootProcedureState(153): Add procedure pid=19, ppid=14, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 7th rollback step 2018-12-04 20:49:18,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=14 2018-12-04 20:49:18,565 INFO 
[PEWorker-8] assignment.RegionStateStore(200): pid=18 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:18,565 INFO [PEWorker-5] assignment.RegionStateStore(200): pid=20 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:18,565 INFO [PEWorker-13] assignment.RegionStateStore(200): pid=15 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,569 INFO [PEWorker-8] assignment.RegionTransitionProcedure(267): Dispatch pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:18,569 INFO [PEWorker-5] assignment.RegionTransitionProcedure(267): Dispatch pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:18,569 INFO [PEWorker-13] assignment.RegionTransitionProcedure(267): Dispatch pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,569 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 8th rollback step 2018-12-04 20:49:18,570 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 9th rollback step 2018-12-04 20:49:18,570 DEBUG [PEWorker-5] procedure2.RootProcedureState(153): Add procedure pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 10th rollback step 2018-12-04 20:49:18,588 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1609): Close eea7db479f05d0bfd00980b44810efbb without moving 2018-12-04 20:49:18,588 INFO [RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=51486] regionserver.RSRpcServices(1609): Close 5abac36fc00b7260425322877c1d024f without moving 2018-12-04 20:49:18,588 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=51486] regionserver.RSRpcServices(1609): Close 17bf706db6019b3980612acaaf29410d without moving 2018-12-04 20:49:18,590 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1541): Closing eea7db479f05d0bfd00980b44810efbb, disabling compactions & flushes 2018-12-04 20:49:18,590 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 5abac36fc00b7260425322877c1d024f, disabling compactions & flushes 2018-12-04 20:49:18,590 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region 
testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 2018-12-04 20:49:18,590 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:49:18,592 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1541): Closing 17bf706db6019b3980612acaaf29410d, disabling compactions & flushes 2018-12-04 20:49:18,592 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:18,617 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-12-04 20:49:18,618 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-12-04 20:49:18,624 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-12-04 20:49:18,624 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1698): Closed 
testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:18,625 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 2018-12-04 20:49:18,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,625 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 
2018-12-04 20:49:18,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 has lock 2018-12-04 20:49:18,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,626 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,626 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 
2018-12-04 20:49:18,626 DEBUG [PEWorker-4] assignment.RegionTransitionProcedure(387): Finishing pid=16, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,627 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=16 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=CLOSED 2018-12-04 20:49:18,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 has lock 2018-12-04 20:49:18,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, 
locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 has lock 2018-12-04 20:49:18,628 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 2018-12-04 20:49:18,628 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:49:18,628 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,628 DEBUG [PEWorker-3] assignment.RegionTransitionProcedure(387): Finishing pid=17, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:18,628 INFO [PEWorker-3] assignment.RegionStateStore(200): pid=17 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=CLOSED 2018-12-04 20:49:18,631 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=16, ppid=14, state=SUCCESS, locked=true; UnassignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 11th rollback step 2018-12-04 20:49:18,631 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=17, ppid=14, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 12th rollback step 2018-12-04 20:49:18,641 INFO [PEWorker-2] procedure2.ProcedureExecutor(1485): Finished pid=19, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 in 389msec, unfinishedSiblingCount=5 2018-12-04 20:49:18,641 DEBUG [PEWorker-14] assignment.RegionTransitionProcedure(387): Finishing pid=15, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:18,641 INFO [PEWorker-14] assignment.RegionStateStore(200): pid=15 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=CLOSED 2018-12-04 20:49:18,645 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=15, ppid=14, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 13th rollback step 2018-12-04 20:49:18,721 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=34504] regionserver.RSRpcServices(1609): 
Close f54fb87a834cb50fd2027cf50bec8dde without moving 2018-12-04 20:49:18,721 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1609): Close 3694f6258e9e47dea826bcb208d58324 without moving 2018-12-04 20:49:18,724 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing f54fb87a834cb50fd2027cf50bec8dde, disabling compactions & flushes 2018-12-04 20:49:18,724 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1541): Closing 3694f6258e9e47dea826bcb208d58324, disabling compactions & flushes 2018-12-04 20:49:18,725 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:18,725 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 
2018-12-04 20:49:18,740 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-12-04 20:49:18,740 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-12-04 20:49:18,742 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:18,742 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 
2018-12-04 20:49:18,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:18,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:18,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=1) to run queue because: pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:18,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=2) to run queue because: pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 has lock
2018-12-04 20:49:18,745 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:18,745 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=5 size=0) from run queue because: queue is empty after polling out pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:18,746 DEBUG [PEWorker-6] assignment.RegionTransitionProcedure(387): Finishing pid=20, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:18,746 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:18,746 DEBUG [PEWorker-7] assignment.RegionTransitionProcedure(387): Finishing pid=18, ppid=14, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:18,746 INFO [PEWorker-6] assignment.RegionStateStore(200): pid=20 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=CLOSED
2018-12-04 20:49:18,747 INFO [PEWorker-7] assignment.RegionStateStore(200): pid=18 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=CLOSED
2018-12-04 20:49:18,777 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=18, ppid=14, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 14th rollback step
2018-12-04 20:49:18,777 DEBUG [PEWorker-6] procedure2.RootProcedureState(153): Add procedure pid=20, ppid=14, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 15th rollback step
2018-12-04 20:49:19,025 INFO [PEWorker-4] procedure2.ProcedureExecutor(1485): Finished pid=16, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 in 576msec, unfinishedSiblingCount=2
2018-12-04 20:49:19,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1485): Finished pid=17, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 in 576msec, unfinishedSiblingCount=2
2018-12-04 20:49:19,025 INFO [PEWorker-14] procedure2.ProcedureExecutor(1485): Finished pid=15, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 in 590msec, unfinishedSiblingCount=2
2018-12-04 20:49:19,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=14
2018-12-04 20:49:19,233 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=18, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 released the shared lock
2018-12-04 20:49:19,233 INFO [PEWorker-6] procedure2.ProcedureExecutor(1485): Finished pid=20, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 in 722msec, unfinishedSiblingCount=1
2018-12-04 20:49:19,329 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=14, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:19,330 INFO [PEWorker-7] procedure2.ProcedureExecutor(1897): Finished subprocedure pid=18, resume processing parent pid=14, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:19,331 INFO [PEWorker-7] procedure2.ProcedureExecutor(1485): Finished pid=18, ppid=14, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 in 722msec, unfinishedSiblingCount=0
2018-12-04 20:49:19,331 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=14, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:19,331 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (14) sharedLock=0 size=0) from run queue because: pid=14, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:19,412 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=14, state=RUNNABLE:DISABLE_TABLE_SET_DISABLED_TABLE_STATE, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 16th rollback step
2018-12-04 20:49:19,503 DEBUG [PEWorker-11] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956559503}]},"ts":1543956559503}
2018-12-04 20:49:19,512 INFO [PEWorker-11] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=DISABLED in hbase:meta
2018-12-04 20:49:19,532 INFO [PEWorker-11] procedure.DisableTableProcedure(310): Set testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 to state=DISABLED
2018-12-04 20:49:19,532 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=14, state=RUNNABLE:DISABLE_TABLE_POST_OPERATION, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 17th rollback step
2018-12-04 20:49:19,620 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=14, state=SUCCESS, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 18th rollback step
2018-12-04 20:49:19,755 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2018-12-04 20:49:19,836 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=14, state=SUCCESS; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock
2018-12-04 20:49:19,837 INFO [PEWorker-11] procedure2.ProcedureExecutor(1485): Finished pid=14, state=SUCCESS; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 in 2.2350sec
2018-12-04 20:49:20,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=14
2018-12-04 20:49:20,471 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3666): Operation: DISABLE, Table Name: default:testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, procId: 14 completed
2018-12-04 20:49:20,497 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:49:20,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956560498 (current time:1543956560498).
2018-12-04 20:49:20,499 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x5f489ceb to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:20,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56463af9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:20,589 INFO [RS-EventLoopGroup-4-5] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:52067, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:49:20,599 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x5f489ceb to localhost:64381
2018-12-04 20:49:20,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:49:20,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot...
2018-12-04 20:49:20,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(639): Table is disabled, running snapshot entirely on master.
2018-12-04 20:49:20,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=21, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:20,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=21, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:20,952 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=21, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:20,952 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (21) sharedLock=0 size=0) from run queue because: pid=21, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE held the exclusive lock
2018-12-04 20:49:20,952 DEBUG [PEWorker-9] locking.LockProcedure(309): LOCKED pid=21, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:21,123 INFO [PEWorker-9] procedure2.TimeoutExecutorThread(82): ADDED pid=21, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE; timeout=600000, timestamp=1543957161123
2018-12-04 20:49:21,123 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=21, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE as the 0th rollback step
2018-12-04 20:49:21,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(641): Started snapshot: { ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:49:21,127 INFO [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(161): Running DISABLED table snapshot emptySnaptb-1543956551635 C_M_SNAPSHOT_TABLE on table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:21,139 DEBUG [Time-limited test] client.HBaseAdmin(2537): Waiting a max of 300000 ms for snapshot '{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }'' to complete. (max 50000 ms per retry)
2018-12-04 20:49:21,139 DEBUG [Time-limited test] client.HBaseAdmin(2546): (#1) Sleeping: 250ms while waiting for snapshot completion.
2018-12-04 20:49:21,203 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0
2018-12-04 20:49:21,203 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0
2018-12-04 20:49:21,208 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741843_1019 size 118
2018-12-04 20:49:21,228 INFO [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.DisabledTableSnapshotHandler(96): Starting to write region info and WALs for regions for offline snapshot:{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=DISABLED }
2018-12-04 20:49:21,268 DEBUG [DisabledTableSnapshot-pool14-t1] snapshot.SnapshotManifest(283): Storing region-info for snapshot.
2018-12-04 20:49:21,271 DEBUG [DisabledTableSnapshot-pool14-t2] snapshot.SnapshotManifest(283): Storing region-info for snapshot.
2018-12-04 20:49:21,278 DEBUG [DisabledTableSnapshot-pool14-t1] snapshot.SnapshotManifest(288): Creating references for hfiles
2018-12-04 20:49:21,278 DEBUG [DisabledTableSnapshot-pool14-t2] snapshot.SnapshotManifest(288): Creating references for hfiles
2018-12-04 20:49:21,282 DEBUG [DisabledTableSnapshot-pool14-t3] snapshot.SnapshotManifest(283): Storing region-info for snapshot.
2018-12-04 20:49:21,289 DEBUG [DisabledTableSnapshot-pool14-t3] snapshot.SnapshotManifest(288): Creating references for hfiles
2018-12-04 20:49:21,306 DEBUG [DisabledTableSnapshot-pool14-t4] snapshot.SnapshotManifest(283): Storing region-info for snapshot.
2018-12-04 20:49:21,307 DEBUG [DisabledTableSnapshot-pool14-t4] snapshot.SnapshotManifest(288): Creating references for hfiles
2018-12-04 20:49:21,307 DEBUG [DisabledTableSnapshot-pool14-t6] snapshot.SnapshotManifest(283): Storing region-info for snapshot.
2018-12-04 20:49:21,307 DEBUG [DisabledTableSnapshot-pool14-t6] snapshot.SnapshotManifest(288): Creating references for hfiles
2018-12-04 20:49:21,308 DEBUG [DisabledTableSnapshot-pool14-t5] snapshot.SnapshotManifest(283): Storing region-info for snapshot.
2018-12-04 20:49:21,308 DEBUG [DisabledTableSnapshot-pool14-t5] snapshot.SnapshotManifest(288): Creating references for hfiles
2018-12-04 20:49:21,320 DEBUG [DisabledTableSnapshot-pool14-t3] snapshot.SnapshotManifest(304): No files under family: cf
2018-12-04 20:49:21,321 DEBUG [DisabledTableSnapshot-pool14-t1] snapshot.SnapshotManifest(304): No files under family: cf
2018-12-04 20:49:21,325 DEBUG [DisabledTableSnapshot-pool14-t4] snapshot.SnapshotManifest(304): No files under family: cf
2018-12-04 20:49:21,326 DEBUG [DisabledTableSnapshot-pool14-t2] snapshot.SnapshotManifest(304): No files under family: cf
2018-12-04 20:49:21,326 DEBUG [DisabledTableSnapshot-pool14-t5] snapshot.SnapshotManifest(304): No files under family: cf
2018-12-04 20:49:21,327 DEBUG [DisabledTableSnapshot-pool14-t6] snapshot.SnapshotManifest(304): No files under family: cf
2018-12-04 20:49:21,390 DEBUG [Time-limited test] client.HBaseAdmin(2552): Getting current status of snapshot from master...
2018-12-04 20:49:21,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1161): Checking to see if snapshot from request:{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } is done
2018-12-04 20:49:21,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(387): Snapshoting '{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }' is still in progress!
2018-12-04 20:49:21,434 DEBUG [Time-limited test] client.HBaseAdmin(2546): (#2) Sleeping: 500ms while waiting for snapshot completion.
2018-12-04 20:49:21,445 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:21,463 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741846_1022 size 112
2018-12-04 20:49:21,464 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741846_1022 size 112
2018-12-04 20:49:21,476 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741845_1021{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 112
2018-12-04 20:49:21,476 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741845_1021 size 112
2018-12-04 20:49:21,477 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741845_1021 size 112
2018-12-04 20:49:21,565 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741844_1020{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 111
2018-12-04 20:49:21,565 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741847_1023{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 112
2018-12-04 20:49:21,566 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741844_1020 size 111
2018-12-04 20:49:21,566 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741848_1024{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 111
2018-12-04 20:49:21,566 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:21,571 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741844_1020 size 111
2018-12-04 20:49:21,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741847_1023 size 112
2018-12-04 20:49:21,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741847_1023 size 112
2018-12-04 20:49:21,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741849_1025 size 112
2018-12-04 20:49:21,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741848_1024 size 111
2018-12-04 20:49:21,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741848_1024 size 111
2018-12-04 20:49:21,572 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741849_1025 size 112
2018-12-04 20:49:21,934 DEBUG [Time-limited test] client.HBaseAdmin(2552): Getting current status of snapshot from master...
2018-12-04 20:49:21,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1161): Checking to see if snapshot from request:{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } is done
2018-12-04 20:49:21,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(387): Snapshoting '{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }' is still in progress!
2018-12-04 20:49:21,937 DEBUG [Time-limited test] client.HBaseAdmin(2546): (#3) Sleeping: 750ms while waiting for snapshot completion.
2018-12-04 20:49:21,974 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.DisabledTableSnapshotHandler(118): Marking snapshot{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=DISABLED } as finished.
2018-12-04 20:49:21,974 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.SnapshotManifest(466): Convert to Single Snapshot Manifest
2018-12-04 20:49:21,983 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.SnapshotManifestV1(125): No regions under directory:hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/.tmp/emptySnaptb-1543956551635
2018-12-04 20:49:22,038 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:49:22,039 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:49:22,043 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741850_1026 size 1252
2018-12-04 20:49:22,077 INFO [IPC Server handler 5 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741849_1025 127.0.0.1:60454 127.0.0.1:33680 127.0.0.1:54375
2018-12-04 20:49:22,079 INFO [IPC Server handler 2 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741847_1023 127.0.0.1:60454 127.0.0.1:54375 127.0.0.1:33680
2018-12-04 20:49:22,082 INFO [IPC Server handler 8 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741846_1022 127.0.0.1:33680 127.0.0.1:60454 127.0.0.1:54375
2018-12-04 20:49:22,084 INFO [IPC Server handler 6 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741844_1020 127.0.0.1:33680 127.0.0.1:60454 127.0.0.1:54375
2018-12-04 20:49:22,085 INFO [IPC Server handler 4 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741845_1021 127.0.0.1:33680 127.0.0.1:60454 127.0.0.1:54375
2018-12-04 20:49:22,090 INFO [IPC Server handler 1 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741848_1024 127.0.0.1:60454 127.0.0.1:33680 127.0.0.1:54375
2018-12-04 20:49:22,122 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(253): Sentinel is done, just moving the snapshot from hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/.tmp/emptySnaptb-1543956551635 to hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/emptySnaptb-1543956551635
2018-12-04 20:49:22,140 INFO [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(215): Snapshot emptySnaptb-1543956551635 of table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 completed
2018-12-04 20:49:22,140 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(228): Launching cleanup of working dir:hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/.tmp/emptySnaptb-1543956551635
2018-12-04 20:49:22,145 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (21) sharedLock=0 size=1) to run queue because: pid=21, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE has lock
2018-12-04 20:49:22,145 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (21) sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=21, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:22,147 DEBUG [PEWorker-10] locking.LockProcedure(240): UNLOCKED pid=21, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:22,147 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=21, state=SUCCESS, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE as the 1th rollback step
2018-12-04 20:49:22,353 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=21, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE released the exclusive lock
2018-12-04 20:49:22,354 INFO [PEWorker-10] procedure2.ProcedureExecutor(1485): Finished pid=21, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE in 1.4710sec
2018-12-04 20:49:22,687 DEBUG [Time-limited test] client.HBaseAdmin(2552): Getting current status of snapshot from master...
2018-12-04 20:49:22,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1161): Checking to see if snapshot from request:{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } is done
2018-12-04 20:49:22,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(384): Snapshot '{ ss=emptySnaptb-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }' has completed, notifying client.
2018-12-04 20:49:22,692 INFO [Time-limited test] client.HBaseAdmin$14(854): Started enable of testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:22,698 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.HMaster$10(2491): Client=jenkins//67.195.81.154 enable testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:22,773 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652] blockmanagement.BlockManager(3480): BLOCK* BlockManager: ask 127.0.0.1:33680 to delete [blk_1073741844_1020, blk_1073741845_1021, blk_1073741846_1022, blk_1073741847_1023, blk_1073741848_1024, blk_1073741849_1025]
2018-12-04 20:49:22,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=22, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:22,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=22, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:22,927 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=22, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:22,927 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=0) from run queue because: pid=22, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:22,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=22
2018-12-04 20:49:23,070 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=22, state=RUNNABLE:ENABLE_TABLE_PRE_OPERATION, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 0th rollback step
2018-12-04 20:49:23,196 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=22, state=RUNNABLE:ENABLE_TABLE_SET_ENABLING_TABLE_STATE, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 1th rollback step
2018-12-04 20:49:23,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=22
2018-12-04 20:49:23,299 INFO [PEWorker-12] procedure.EnableTableProcedure(372): Attempting to enable the table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:23,300 DEBUG [PEWorker-12] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956563300}]},"ts":1543956563300}
2018-12-04 20:49:23,315 INFO [PEWorker-12] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=ENABLING in hbase:meta
2018-12-04 20:49:23,357 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=22, state=RUNNABLE:ENABLE_TABLE_MARK_REGIONS_ONLINE, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 2th rollback step
2018-12-04 20:49:23,437 INFO [PEWorker-12] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f}, {pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d}, {pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb}, {pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde}, {pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90}, {pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324}]
2018-12-04 20:49:23,437 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=22, state=WAITING:ENABLE_TABLE_SET_ENABLED_TABLE_STATE, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 3th rollback step
2018-12-04 20:49:23,656 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=1) to run queue because: pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f has the excusive lock access
2018-12-04 20:49:23,656 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=2) to run queue because: pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d has the excusive lock access
2018-12-04 20:49:23,657 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=3) to run queue because: pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb has the excusive lock access
2018-12-04 20:49:23,657 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=4) to run queue because: pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde has the excusive lock access
2018-12-04 20:49:23,657 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=5) to 
run queue because: pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 has the excusive lock access 2018-12-04 20:49:23,657 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=6) to run queue because: pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 has the excusive lock access 2018-12-04 20:49:23,659 INFO [PEWorker-1] procedure.MasterProcedureScheduler(741): Took xlock for pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:23,659 INFO [PEWorker-8] procedure.MasterProcedureScheduler(741): Took xlock for pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:23,660 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=2 size=0) from run queue because: queue is empty after polling out pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:23,660 INFO [PEWorker-5] procedure.MasterProcedureScheduler(741): Took xlock for pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:23,661 INFO [PEWorker-15] procedure.MasterProcedureScheduler(741): Took xlock for pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 2018-12-04 20:49:23,661 INFO [PEWorker-16] procedure.MasterProcedureScheduler(741): Took xlock for pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde 2018-12-04 20:49:23,661 INFO [PEWorker-13] procedure.MasterProcedureScheduler(741): Took xlock for pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:23,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=22 2018-12-04 20:49:23,761 INFO [PEWorker-1] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:23,761 INFO [PEWorker-5] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:23,761 INFO [PEWorker-1] assignment.AssignProcedure(254): Starting pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=true 2018-12-04 20:49:23,761 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(356): Add 
TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) to run queue because: pid=22, state=WAITING:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock 2018-12-04 20:49:23,761 INFO [PEWorker-8] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:23,761 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 as the 4th rollback step 2018-12-04 20:49:23,761 INFO [PEWorker-5] assignment.AssignProcedure(254): Starting pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203; forceNewPlan=false, retain=true 2018-12-04 20:49:23,761 INFO [PEWorker-8] assignment.AssignProcedure(254): Starting pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb; rit=OFFLINE, location=asf910.gq1.ygridcore.net,36011,1543956539302; forceNewPlan=false, retain=true 2018-12-04 20:49:23,762 DEBUG [PEWorker-5] procedure2.RootProcedureState(153): Add procedure pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f as the 5th rollback step 2018-12-04 20:49:23,762 DEBUG 
[PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb as the 6th rollback step 2018-12-04 20:49:23,912 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 3 regions. 3 retained the pre-restart assignment. 2018-12-04 20:49:23,912 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 has lock 2018-12-04 20:49:23,913 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb has lock 2018-12-04 20:49:23,913 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=3) to run queue because: pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f has lock 2018-12-04 20:49:23,913 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove 
TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:23,953 INFO [PEWorker-15] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:23,953 INFO [PEWorker-15] assignment.AssignProcedure(254): Starting pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90; rit=OFFLINE, location=asf910.gq1.ygridcore.net,36011,1543956539302; forceNewPlan=false, retain=true 2018-12-04 20:49:23,953 INFO [PEWorker-3] assignment.RegionStateStore(200): pid=25 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:23,953 INFO [PEWorker-16] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:23,953 INFO [PEWorker-13] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:23,954 INFO [PEWorker-16] assignment.AssignProcedure(254): Starting pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=true 2018-12-04 20:49:23,953 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=27, 
ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 as the 7th rollback step 2018-12-04 20:49:23,953 INFO [PEWorker-14] assignment.RegionStateStore(200): pid=28 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:23,953 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=23 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:23,954 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde as the 8th rollback step 2018-12-04 20:49:23,954 INFO [PEWorker-13] assignment.AssignProcedure(254): Starting pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203; forceNewPlan=false, retain=true 2018-12-04 20:49:23,955 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d as the 9th rollback step 2018-12-04 20:49:23,958 INFO [PEWorker-3] assignment.RegionTransitionProcedure(267): Dispatch pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:23,958 INFO [PEWorker-4] assignment.RegionTransitionProcedure(267): Dispatch pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:23,958 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb as the 10th rollback step 2018-12-04 20:49:23,958 INFO [PEWorker-14] assignment.RegionTransitionProcedure(267): Dispatch pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:23,958 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f as the 11th rollback step 2018-12-04 20:49:23,958 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 as the 12th rollback step 2018-12-04 20:49:24,104 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 3 regions. 3 retained the pre-restart assignment. 
2018-12-04 20:49:24,104 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde has lock
2018-12-04 20:49:24,105 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 has lock
2018-12-04 20:49:24,105 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=3) to run queue because: pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d has lock
2018-12-04 20:49:24,105 INFO [PEWorker-6] assignment.RegionStateStore(200): pid=24 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:24,110 INFO [PEWorker-6] assignment.RegionTransitionProcedure(267): Dispatch pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d
2018-12-04 20:49:24,110 DEBUG [PEWorker-6] procedure2.RootProcedureState(153): Add procedure pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d as the 13th rollback step
2018-12-04 20:49:24,110 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:24,110 INFO [PEWorker-11] assignment.RegionStateStore(200): pid=26 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:24,110 INFO [PEWorker-7] assignment.RegionStateStore(200): pid=27 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:24,112 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:24,114 INFO [PEWorker-11] assignment.RegionTransitionProcedure(267): Dispatch pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:24,115 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde as the 14th rollback step
2018-12-04 20:49:24,115 INFO [PEWorker-7] assignment.RegionTransitionProcedure(267): Dispatch pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:24,116 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 as the 15th rollback step
2018-12-04 20:49:24,150 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:24,150 INFO [RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=51486] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.
2018-12-04 20:49:24,156 INFO [RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=51486] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:24,157 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 5abac36fc00b7260425322877c1d024f, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.', STARTKEY => '', ENDKEY => '1'}
2018-12-04 20:49:24,157 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(7177): Opening region: {ENCODED => 17bf706db6019b3980612acaaf29410d, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.', STARTKEY => '1', ENDKEY => '2'}
2018-12-04 20:49:24,158 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 17bf706db6019b3980612acaaf29410d
2018-12-04 20:49:24,158 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:24,158 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:24,158 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:24,159 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(7177): Opening region: {ENCODED => 3694f6258e9e47dea826bcb208d58324, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.', STARTKEY => '5', ENDKEY => ''}
2018-12-04 20:49:24,159 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:24,159 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:24,162 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(7177): Opening region: {ENCODED => eea7db479f05d0bfd00980b44810efbb, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.', STARTKEY => '2', ENDKEY => '3'}
2018-12-04 20:49:24,163 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:24,163 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:24,165 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf
2018-12-04 20:49:24,165 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf
2018-12-04 20:49:24,165 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf
2018-12-04 20:49:24,165 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf
2018-12-04 20:49:24,166 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:24,166 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:24,167 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:24,167 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:24,168 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:24,168 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf
2018-12-04 20:49:24,169 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf
2018-12-04 20:49:24,169 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf
2018-12-04 20:49:24,169 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf
2018-12-04 20:49:24,170 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:24,170 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:24,170 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:24,172 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:24,172 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:49:24,174 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:24,174 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:24,174 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:24,176 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:24,176 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:24,178 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits 
file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:24,180 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:24,182 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:24,183 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(998): writing seq id for eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:24,184 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:24,187 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:24,187 INFO [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(1002): Opened eea7db479f05d0bfd00980b44810efbb; next sequenceid=5 2018-12-04 20:49:24,188 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 5abac36fc00b7260425322877c1d024f; next sequenceid=5 2018-12-04 20:49:24,188 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under 
hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:24,189 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(998): writing seq id for 3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:24,206 INFO [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(1002): Opened 3694f6258e9e47dea826bcb208d58324; next sequenceid=5 2018-12-04 20:49:24,214 INFO [PostOpenDeployTasks:5abac36fc00b7260425322877c1d024f] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 2018-12-04 20:49:24,214 INFO [PostOpenDeployTasks:3694f6258e9e47dea826bcb208d58324] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 2018-12-04 20:49:24,215 INFO [PostOpenDeployTasks:eea7db479f05d0bfd00980b44810efbb] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 
2018-12-04 20:49:24,214 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(998): writing seq id for 17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:24,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=5, pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=5, pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:24,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f has lock 2018-12-04 20:49:24,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, 
region=3694f6258e9e47dea826bcb208d58324 has lock 2018-12-04 20:49:24,216 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:24,216 DEBUG [PEWorker-9] assignment.RegionTransitionProcedure(387): Finishing pid=28, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:24,217 INFO [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(1002): Opened 17bf706db6019b3980612acaaf29410d; next sequenceid=5 2018-12-04 20:49:24,217 INFO [PEWorker-9] assignment.RegionStateStore(200): pid=28 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=OPEN, openSeqNum=5, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:24,217 DEBUG [PEWorker-10] assignment.RegionTransitionProcedure(387): Finishing pid=23, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,217 DEBUG [PostOpenDeployTasks:5abac36fc00b7260425322877c1d024f] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 
2018-12-04 20:49:24,217 DEBUG [PostOpenDeployTasks:3694f6258e9e47dea826bcb208d58324] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 2018-12-04 20:49:24,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=5, pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:24,220 INFO [PEWorker-10] assignment.RegionStateStore(200): pid=23 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=OPEN, openSeqNum=5, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,219 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. on asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb has lock 2018-12-04 20:49:24,221 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 
on asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:24,222 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:24,222 DEBUG [PostOpenDeployTasks:eea7db479f05d0bfd00980b44810efbb] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:49:24,222 INFO [PostOpenDeployTasks:17bf706db6019b3980612acaaf29410d] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:24,226 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 
on asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:24,227 DEBUG [PEWorker-2] assignment.RegionTransitionProcedure(387): Finishing pid=25, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:24,227 INFO [PEWorker-2] assignment.RegionStateStore(200): pid=25 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=OPEN, openSeqNum=5, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:24,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=5, pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d has lock 2018-12-04 20:49:24,233 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=28, ppid=22, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 as the 16th rollback step 2018-12-04 20:49:24,233 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): 
Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:24,234 DEBUG [PostOpenDeployTasks:17bf706db6019b3980612acaaf29410d] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:24,234 DEBUG [PEWorker-12] assignment.RegionTransitionProcedure(387): Finishing pid=24, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,234 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=23, ppid=22, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f as the 17th rollback step 2018-12-04 20:49:24,234 INFO [PEWorker-12] assignment.RegionStateStore(200): pid=24 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=OPEN, openSeqNum=5, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,234 DEBUG [PEWorker-2] procedure2.RootProcedureState(153): Add procedure pid=25, ppid=22, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb as the 18th rollback step 2018-12-04 20:49:24,234 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] 
handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. on asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:24,248 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=24, ppid=22, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d as the 19th rollback step 2018-12-04 20:49:24,266 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:24,267 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. 
2018-12-04 20:49:24,272 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => f54fb87a834cb50fd2027cf50bec8dde, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.', STARTKEY => '3', ENDKEY => '4'} 2018-12-04 20:49:24,272 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 f54fb87a834cb50fd2027cf50bec8dde 2018-12-04 20:49:24,273 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:24,276 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(7177): Opening region: {ENCODED => 0cbbdc66f0b53e014d4b09cb9f965d90, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.', STARTKEY => '4', ENDKEY => '5'} 2018-12-04 20:49:24,277 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 0cbbdc66f0b53e014d4b09cb9f965d90 2018-12-04 20:49:24,277 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:24,287 DEBUG 
[StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf 2018-12-04 20:49:24,287 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf 2018-12-04 20:49:24,290 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf 2018-12-04 20:49:24,291 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:49:24,296 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:49:24,289 DEBUG [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf 2018-12-04 20:49:24,298 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:24,299 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:49:24,300 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 
2018-12-04 20:49:24,300 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90 2018-12-04 20:49:24,301 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:24,302 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde 2018-12-04 20:49:24,308 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90 2018-12-04 20:49:24,308 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde 2018-12-04 20:49:24,313 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for f54fb87a834cb50fd2027cf50bec8dde 2018-12-04 20:49:24,314 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(998): writing seq id for 0cbbdc66f0b53e014d4b09cb9f965d90 2018-12-04 20:49:24,315 INFO [RS_OPEN_REGION-regionserver/asf910:0-1] 
regionserver.HRegion(1002): Opened 0cbbdc66f0b53e014d4b09cb9f965d90; next sequenceid=5 2018-12-04 20:49:24,315 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened f54fb87a834cb50fd2027cf50bec8dde; next sequenceid=5 2018-12-04 20:49:24,336 INFO [PostOpenDeployTasks:f54fb87a834cb50fd2027cf50bec8dde] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:24,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=5, pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:24,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde has lock 2018-12-04 20:49:24,347 INFO [PostOpenDeployTasks:0cbbdc66f0b53e014d4b09cb9f965d90] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. 
2018-12-04 20:49:24,347 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:24,348 DEBUG [PostOpenDeployTasks:f54fb87a834cb50fd2027cf50bec8dde] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:24,348 DEBUG [PEWorker-1] assignment.RegionTransitionProcedure(387): Finishing pid=26, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:24,348 INFO [PEWorker-1] assignment.RegionStateStore(200): pid=26 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=OPEN, openSeqNum=5, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:24,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=5, pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:24,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 has lock
2018-12-04 20:49:24,350 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:24,352 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. on asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:24,353 DEBUG [PEWorker-5] assignment.RegionTransitionProcedure(387): Finishing pid=27, ppid=22, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:24,353 INFO [PEWorker-5] assignment.RegionStateStore(200): pid=27 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=OPEN, openSeqNum=5, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:24,353 DEBUG [PostOpenDeployTasks:0cbbdc66f0b53e014d4b09cb9f965d90] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:24,355 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. on asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:24,373 DEBUG [PEWorker-5] procedure2.RootProcedureState(153): Add procedure pid=27, ppid=22, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 as the 20th rollback step
2018-12-04 20:49:24,374 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=26, ppid=22, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde as the 21th rollback step
2018-12-04 20:49:24,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=22
2018-12-04 20:49:24,526 INFO [PEWorker-9] procedure2.ProcedureExecutor(1485): Finished pid=28, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 in 796msec, unfinishedSiblingCount=5
2018-12-04 20:49:24,526 INFO [PEWorker-2] procedure2.ProcedureExecutor(1485): Finished pid=25, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb in 797msec, unfinishedSiblingCount=4
2018-12-04 20:49:24,526 INFO [PEWorker-12] procedure2.ProcedureExecutor(1485): Finished pid=24, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d in 811msec, unfinishedSiblingCount=3
2018-12-04 20:49:24,527 INFO [PEWorker-10] procedure2.ProcedureExecutor(1485): Finished pid=23, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f in 797msec, unfinishedSiblingCount=2
2018-12-04 20:49:24,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1485): Finished pid=27, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 in 936msec, unfinishedSiblingCount=1
2018-12-04 20:49:24,867 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=26, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde released the shared lock
2018-12-04 20:49:24,953 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=22, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:24,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1897): Finished subprocedure pid=26, resume processing parent pid=22, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:24,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1485): Finished pid=26, ppid=22, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde in 937msec, unfinishedSiblingCount=0
2018-12-04 20:49:24,954 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=22, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:24,954 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (22) sharedLock=0 size=0) from run queue because: pid=22, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:25,061 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956565060}]},"ts":1543956565060}
2018-12-04 20:49:25,065 INFO [PEWorker-1] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=ENABLED in hbase:meta
2018-12-04 20:49:25,073 INFO [PEWorker-1] procedure.EnableTableProcedure(390): Table 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635' was successfully enabled.
2018-12-04 20:49:25,074 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=22, state=RUNNABLE:ENABLE_TABLE_POST_OPERATION, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 22th rollback step
2018-12-04 20:49:25,142 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=22, state=SUCCESS, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 23th rollback step
2018-12-04 20:49:25,370 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=22, state=SUCCESS; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock
2018-12-04 20:49:25,370 INFO [PEWorker-1] procedure2.ProcedureExecutor(1485): Finished pid=22, state=SUCCESS; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 in 2.4420sec
2018-12-04 20:49:25,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=22
2018-12-04 20:49:25,706 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3666): Operation: ENABLE, Table Name: default:testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, procId: 22 completed
2018-12-04 20:49:25,774 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652] blockmanagement.BlockManager(3480): BLOCK* BlockManager: ask 127.0.0.1:60454 to delete [blk_1073741844_1020, blk_1073741845_1021, blk_1073741846_1022, blk_1073741847_1023, blk_1073741848_1024, blk_1073741849_1025]
2018-12-04 20:49:25,861 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. with WAL disabled. Data may be lost in the event of a crash.
2018-12-04 20:49:25,866 INFO [RS-EventLoopGroup-5-4] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:60044, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:49:25,871 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. with WAL disabled. Data may be lost in the event of a crash.
2018-12-04 20:49:25,879 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. with WAL disabled. Data may be lost in the event of a crash.
2018-12-04 20:49:25,887 INFO [RS-EventLoopGroup-3-4] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:54172, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:49:25,888 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. with WAL disabled. Data may be lost in the event of a crash.
2018-12-04 20:49:25,895 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. with WAL disabled. Data may be lost in the event of a crash.
2018-12-04 20:49:25,910 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-12-04 20:49:25,910 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-12-04 20:49:25,910 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2018-12-04 20:49:25,920 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. with WAL disabled. Data may be lost in the event of a crash.
2018-12-04 20:49:25,971 ERROR [Time-limited test] hbase.HBaseTestingUtility(2442): No region info for row hbase:namespace
2018-12-04 20:49:25,971 ERROR [Time-limited test] hbase.HBaseTestingUtility(2442): No region info for row testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:25,972 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.{ENCODED => 5abac36fc00b7260425322877c1d024f, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.', STARTKEY => '', ENDKEY => '1'}
2018-12-04 20:49:25,972 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.{ENCODED => 17bf706db6019b3980612acaaf29410d, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.', STARTKEY => '1', ENDKEY => '2'}
2018-12-04 20:49:25,972 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.{ENCODED => eea7db479f05d0bfd00980b44810efbb, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.', STARTKEY => '2', ENDKEY => '3'}
2018-12-04 20:49:25,972 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.{ENCODED => f54fb87a834cb50fd2027cf50bec8dde, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.', STARTKEY => '3', ENDKEY => '4'}
2018-12-04 20:49:25,973 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.{ENCODED => 0cbbdc66f0b53e014d4b09cb9f965d90, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.', STARTKEY => '4', ENDKEY => '5'}
2018-12-04 20:49:25,973 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.{ENCODED => 3694f6258e9e47dea826bcb208d58324, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.', STARTKEY => '5', ENDKEY => ''}
2018-12-04 20:49:25,973 DEBUG [Time-limited test] hbase.HBaseTestingUtility(2490): Found 6 rows for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:25,973 DEBUG [Time-limited test] hbase.HBaseTestingUtility(2493): FirstRow=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:25,977 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-12-04 20:49:26,047 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '1', inclusive
2018-12-04 20:49:26,050 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '2', inclusive
2018-12-04 20:49:26,054 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '3', inclusive
2018-12-04 20:49:26,060 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '4', inclusive
2018-12-04 20:49:26,064 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '5', inclusive
2018-12-04 20:49:26,109 INFO [Time-limited test] client.HBaseAdmin$15(919): Started disable of testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:26,110 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.HMaster$11(2524): Client=jenkins//67.195.81.154 disable testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:26,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=29, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:26,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=29, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:26,311 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=29, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635
2018-12-04 20:49:26,311 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=0) from run queue because: pid=29, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock
2018-12-04 20:49:26,416 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=29, state=RUNNABLE:DISABLE_TABLE_PRE_OPERATION, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 0th rollback step
2018-12-04 20:49:26,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=29
2018-12-04 20:49:26,475 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=29, state=RUNNABLE:DISABLE_TABLE_SET_DISABLING_TABLE_STATE, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 1th rollback step
2018-12-04 20:49:26,552 DEBUG [PEWorker-8] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956566552}]},"ts":1543956566552}
2018-12-04 20:49:26,561 INFO [PEWorker-8] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=DISABLING in hbase:meta
2018-12-04 20:49:26,572 INFO [PEWorker-8] procedure.DisableTableProcedure(295): Set testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 to state=DISABLING
2018-12-04 20:49:26,573 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=29, state=RUNNABLE:DISABLE_TABLE_MARK_REGIONS_OFFLINE, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 2th rollback step
2018-12-04 20:49:26,627 INFO [PEWorker-8] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203}, {pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203}, {pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302}, {pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068}, {pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302}, {pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068}]
2018-12-04 20:49:26,627 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=29, state=WAITING:DISABLE_TABLE_ADD_REPLICATION_BARRIER, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 3th rollback step
2018-12-04 20:49:26,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=29
2018-12-04 20:49:26,686 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=1) to run queue because: pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 has the excusive lock access
2018-12-04 20:49:26,687 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=2) to run queue because: pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 has the excusive lock access
2018-12-04 20:49:26,687 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=3) to run queue because: pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 has the excusive lock access
2018-12-04 20:49:26,687 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=4) to run queue because: pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access
2018-12-04 20:49:26,687 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=5) to run queue because: pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 has the excusive lock access
2018-12-04 20:49:26,689 INFO [PEWorker-13] procedure.MasterProcedureScheduler(741): Took xlock for pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:26,689 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=1 size=0) from run queue because: queue is empty after polling out pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,690 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=1 size=1) to run queue because: pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 has the excusive lock access
2018-12-04 20:49:26,690 INFO [PEWorker-3] procedure.MasterProcedureScheduler(741): Took xlock for pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,693 INFO [PEWorker-15] procedure.MasterProcedureScheduler(741): Took xlock for pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:26,693 INFO [PEWorker-16] procedure.MasterProcedureScheduler(741): Took xlock for pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,693 INFO [PEWorker-4] procedure.MasterProcedureScheduler(741): Took xlock for pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,693 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=5 size=0) from run queue because: queue is empty after polling out pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,693 INFO [PEWorker-14] procedure.MasterProcedureScheduler(741): Took xlock for pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,770 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) to run queue because: pid=29, state=WAITING:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock
2018-12-04 20:49:26,770 INFO [PEWorker-13] assignment.RegionStateStore(200): pid=32 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:26,770 INFO [PEWorker-16] assignment.RegionStateStore(200): pid=33 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,770 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=30 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,770 INFO [PEWorker-3] assignment.RegionStateStore(200): pid=31 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,771 INFO [PEWorker-15] assignment.RegionStateStore(200): pid=34 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:26,775 INFO [PEWorker-13] assignment.RegionTransitionProcedure(267): Dispatch pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:26,775 INFO [PEWorker-4] assignment.RegionTransitionProcedure(267): Dispatch pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,775 INFO [PEWorker-16] assignment.RegionTransitionProcedure(267): Dispatch pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,775 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 4th rollback step
2018-12-04 20:49:26,775 INFO [PEWorker-3] assignment.RegionTransitionProcedure(267): Dispatch pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:26,775 INFO [PEWorker-15] assignment.RegionTransitionProcedure(267): Dispatch pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:26,776 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 5th rollback step
2018-12-04 20:49:26,776 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 6th rollback step
2018-12-04 20:49:26,776 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 7th rollback step
2018-12-04 20:49:26,777 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 8th rollback step
2018-12-04 20:49:26,926 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.RSRpcServices(1609): Close eea7db479f05d0bfd00980b44810efbb without moving
2018-12-04 20:49:26,927 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.RSRpcServices(1609): Close 5abac36fc00b7260425322877c1d024f without moving
2018-12-04 20:49:26,927 INFO [RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=36011] regionserver.RSRpcServices(1609): Close 0cbbdc66f0b53e014d4b09cb9f965d90 without moving
2018-12-04 20:49:26,934 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1541): Closing 5abac36fc00b7260425322877c1d024f, disabling compactions & flushes
2018-12-04 20:49:26,934 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:26,934 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 0cbbdc66f0b53e014d4b09cb9f965d90, disabling compactions & flushes
2018-12-04 20:49:26,936 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1609): Close f54fb87a834cb50fd2027cf50bec8dde without moving
2018-12-04 20:49:26,936 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1541): Closing eea7db479f05d0bfd00980b44810efbb, disabling compactions & flushes
2018-12-04 20:49:26,936 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.RSRpcServices(1609): Close 17bf706db6019b3980612acaaf29410d without moving
2018-12-04 20:49:26,938 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1541): Closing 17bf706db6019b3980612acaaf29410d, disabling compactions & flushes
2018-12-04 20:49:26,938 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.
2018-12-04 20:49:26,938 INFO [PEWorker-14] assignment.RegionStateStore(200): pid=35 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=CLOSING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,939 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=1.37 KB heapSize=3.16 KB
2018-12-04 20:49:26,941 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:26,937 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=2.75 KB heapSize=6.11 KB
2018-12-04 20:49:26,936 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:26,959 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=2.22 KB heapSize=4.98 KB
2018-12-04 20:49:26,960 INFO [PEWorker-14] assignment.RegionTransitionProcedure(267): Dispatch pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:26,960 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 9th rollback step
2018-12-04 20:49:26,942 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=1.90 KB heapSize=4.28 KB
2018-12-04 20:49:26,948 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1541): Closing f54fb87a834cb50fd2027cf50bec8dde, disabling compactions & flushes
2018-12-04 20:49:26,966 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:26,970 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=2.62 KB heapSize=5.83 KB
2018-12-04 20:49:27,133 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=34504] regionserver.RSRpcServices(1609): Close 3694f6258e9e47dea826bcb208d58324 without moving
2018-12-04 20:49:27,136 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 3694f6258e9e47dea826bcb208d58324, disabling compactions & flushes
2018-12-04 20:49:27,136 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:27,136 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=21.85 KB heapSize=47.17 KB
2018-12-04 20:49:27,145 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:27,157 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741852_1028{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 6946
2018-12-04 20:49:27,157 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741852_1028 size 6946
2018-12-04 20:49:27,158 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741852_1028 size 6946
2018-12-04 20:49:27,162 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741851_1027 size 7694
2018-12-04 20:49:27,163 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741851_1027 size 7694
2018-12-04 20:49:27,166 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.62 KB at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/.tmp/cf/ffcc935b84c24155a2ff3a4e328a2c16
2018-12-04 20:49:27,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=29
2018-12-04 20:49:27,173 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:27,180 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:27,181 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:49:27,200 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:27,201 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:27,201 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0
2018-12-04 20:49:27,202 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.75 KB at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/.tmp/cf/c49c415510d34e33b204433bd5297b6c
2018-12-04 20:49:27,202 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.22 KB at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/.tmp/cf/b530bd7bb42341e99ea7d7a0184faff9
2018-12-04 20:49:27,210 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:49:27,210 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:49:27,211 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:49:27,212 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=1.37 KB at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/.tmp/cf/439a631464a84405b356608f6150d86b
2018-12-04 20:49:27,274 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:49:27,275 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:49:27,275 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0
2018-12-04 20:49:27,277 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=21.85 KB at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/.tmp/cf/7af8d7b38aaf457abd95560952948487
2018-12-04 20:49:27,360 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/.tmp/cf/b530bd7bb42341e99ea7d7a0184faff9 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/b530bd7bb42341e99ea7d7a0184faff9
2018-12-04 20:49:27,360 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/.tmp/cf/7af8d7b38aaf457abd95560952948487 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7af8d7b38aaf457abd95560952948487
2018-12-04 20:49:27,360 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/.tmp/cf/ffcc935b84c24155a2ff3a4e328a2c16 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/ffcc935b84c24155a2ff3a4e328a2c16
2018-12-04 20:49:27,361 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/.tmp/cf/c49c415510d34e33b204433bd5297b6c as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c
2018-12-04 20:49:27,366 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/.tmp/cf/439a631464a84405b356608f6150d86b as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/439a631464a84405b356608f6150d86b
2018-12-04 20:49:27,381 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c, entries=42, sequenceid=8, filesize=7.6 K
2018-12-04 20:49:27,382 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/ffcc935b84c24155a2ff3a4e328a2c16, entries=40, sequenceid=8, filesize=7.5 K
2018-12-04 20:49:27,384 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7af8d7b38aaf457abd95560952948487, entries=334, sequenceid=8, filesize=27.5 K
2018-12-04 20:49:27,384 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/b530bd7bb42341e99ea7d7a0184faff9, entries=34, sequenceid=8, filesize=7.1 K
2018-12-04 20:49:27,390 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/439a631464a84405b356608f6150d86b, entries=21, sequenceid=8, filesize=6.2 K
2018-12-04 20:49:27,397 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2816): Finished flush of dataSize ~2.62 KB/2678, heapSize ~5.84 KB/5976, currentSize=0 B/0 for f54fb87a834cb50fd2027cf50bec8dde in 431ms, sequenceid=8, compaction requested=false
2018-12-04 20:49:27,398 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2816): Finished flush of dataSize ~21.85 KB/22374, heapSize ~47.18 KB/48312, currentSize=0 B/0 for 3694f6258e9e47dea826bcb208d58324 in 262ms, sequenceid=8, compaction requested=false
2018-12-04 20:49:27,399 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(2816): Finished flush of dataSize ~1.37 KB/1405, heapSize ~3.16 KB/3240, currentSize=0 B/0 for 17bf706db6019b3980612acaaf29410d in 460ms, sequenceid=8, compaction requested=false
2018-12-04 20:49:27,399 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2816): Finished flush of dataSize ~2.75 KB/2812, heapSize ~6.12 KB/6264, currentSize=0 B/0 for 5abac36fc00b7260425322877c1d024f in 462ms, sequenceid=8, compaction requested=false
2018-12-04 20:49:27,399 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2816): Finished flush of dataSize ~2.22 KB/2276, heapSize ~4.99 KB/5112, currentSize=0 B/0 for 0cbbdc66f0b53e014d4b09cb9f965d90 in 440ms, sequenceid=8, compaction requested=false
2018-12-04 20:49:27,448 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=4
2018-12-04 20:49:27,448 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=4
2018-12-04 20:49:27,449 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:27,450 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=4
2018-12-04 20:49:27,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:27,452 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.
2018-12-04 20:49:27,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 has lock
2018-12-04 20:49:27,452 DEBUG [PEWorker-6] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:27,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:27,452 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:27,452 DEBUG [PEWorker-6] assignment.RegionTransitionProcedure(387): Finishing pid=30, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:27,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 has lock
2018-12-04 20:49:27,453 INFO [PEWorker-6] assignment.RegionStateStore(200): pid=30 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=CLOSED
2018-12-04 20:49:27,453 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:27,453 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.
2018-12-04 20:49:27,453 DEBUG [PEWorker-12] assignment.RegionTransitionProcedure(387): Finishing pid=31, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203; rit=CLOSING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:27,453 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=4
2018-12-04 20:49:27,454 INFO [PEWorker-12] assignment.RegionStateStore(200): pid=31 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=CLOSED
2018-12-04 20:49:27,546 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=4
2018-12-04 20:49:27,547 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:27,548 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:27,548 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:27,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:27,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 has lock
2018-12-04 20:49:27,554 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:27,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:27,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:27,554 DEBUG [PEWorker-13] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:27,559 DEBUG [PEWorker-6] procedure2.RootProcedureState(153): Add procedure pid=30, ppid=29, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 10th rollback step 2018-12-04 20:49:27,559 DEBUG [PEWorker-13] assignment.RegionTransitionProcedure(387): 
Finishing pid=34, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:27,559 INFO [PEWorker-13] assignment.RegionStateStore(200): pid=34 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=CLOSED 2018-12-04 20:49:27,560 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=31, ppid=29, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 as the 11th rollback step 2018-12-04 20:49:27,562 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=1.90 KB at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/.tmp/cf/af88ed5650204580a5099262ce13c6a7 2018-12-04 20:49:27,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:27,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add 
TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 has lock 2018-12-04 20:49:27,563 DEBUG [PEWorker-16] assignment.RegionTransitionProcedure(387): Finishing pid=33, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:27,564 INFO [PEWorker-16] assignment.RegionStateStore(200): pid=33 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=CLOSED 2018-12-04 20:49:27,563 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:27,564 DEBUG [PEWorker-4] assignment.RegionTransitionProcedure(387): Finishing pid=35, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068; rit=CLOSING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 
20:49:27,565 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=35 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=CLOSED 2018-12-04 20:49:27,567 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:27,567 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 2018-12-04 20:49:27,568 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=33, ppid=29, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 12th rollback step 2018-12-04 20:49:27,570 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=34, ppid=29, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 13th rollback step 2018-12-04 20:49:27,570 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=35, ppid=29, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 as the 14th rollback step 2018-12-04 20:49:27,584 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegionFileSystem(464): Committing 
hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/.tmp/cf/af88ed5650204580a5099262ce13c6a7 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/af88ed5650204580a5099262ce13c6a7 2018-12-04 20:49:27,608 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/af88ed5650204580a5099262ce13c6a7, entries=29, sequenceid=8, filesize=6.8 K 2018-12-04 20:49:27,613 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2816): Finished flush of dataSize ~1.90 KB/1941, heapSize ~4.29 KB/4392, currentSize=0 B/0 for eea7db479f05d0bfd00980b44810efbb in 672ms, sequenceid=8, compaction requested=false 2018-12-04 20:49:27,633 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=4 2018-12-04 20:49:27,635 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 
2018-12-04 20:49:27,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report CLOSED seqId=-1, pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:27,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 has lock 2018-12-04 20:49:27,639 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:27,640 DEBUG [PEWorker-15] assignment.RegionTransitionProcedure(387): Finishing pid=32, ppid=29, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302; rit=CLOSING, 
location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:27,640 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:49:27,640 INFO [PEWorker-15] assignment.RegionStateStore(200): pid=32 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=CLOSED 2018-12-04 20:49:27,651 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=32, ppid=29, state=SUCCESS, locked=true; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 as the 15th rollback step 2018-12-04 20:49:27,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=29 2018-12-04 20:49:27,934 INFO [PEWorker-6] procedure2.ProcedureExecutor(1485): Finished pid=30, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f, server=asf910.gq1.ygridcore.net,51486,1543956539203 in 932msec, unfinishedSiblingCount=5 2018-12-04 20:49:27,935 INFO [PEWorker-12] procedure2.ProcedureExecutor(1485): Finished pid=31, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d, server=asf910.gq1.ygridcore.net,51486,1543956539203 in 933msec, unfinishedSiblingCount=4 2018-12-04 20:49:28,102 INFO [PEWorker-13] procedure2.ProcedureExecutor(1485): Finished pid=34, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90, 
server=asf910.gq1.ygridcore.net,36011,1543956539302 in 943msec, unfinishedSiblingCount=2 2018-12-04 20:49:28,102 INFO [PEWorker-16] procedure2.ProcedureExecutor(1485): Finished pid=33, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde, server=asf910.gq1.ygridcore.net,34504,1543956539068 in 941msec, unfinishedSiblingCount=2 2018-12-04 20:49:28,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1485): Finished pid=35, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324, server=asf910.gq1.ygridcore.net,34504,1543956539068 in 943msec, unfinishedSiblingCount=1 2018-12-04 20:49:28,102 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=32, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 released the shared lock 2018-12-04 20:49:28,203 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=29, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:28,203 INFO [PEWorker-15] procedure2.ProcedureExecutor(1897): Finished subprocedure pid=32, resume processing parent pid=29, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:28,203 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=29, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:28,203 INFO [PEWorker-15] procedure2.ProcedureExecutor(1485): Finished pid=32, ppid=29, state=SUCCESS; UnassignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb, server=asf910.gq1.ygridcore.net,36011,1543956539302 in 1.0230sec, unfinishedSiblingCount=0 2018-12-04 20:49:28,204 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (29) sharedLock=0 size=0) from run queue because: pid=29, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock 2018-12-04 20:49:28,327 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=29, state=RUNNABLE:DISABLE_TABLE_SET_DISABLED_TABLE_STATE, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 16th rollback step 2018-12-04 20:49:28,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956568444}]},"ts":1543956568444} 2018-12-04 20:49:28,447 INFO [PEWorker-3] hbase.MetaTableAccessor(1673): Updated 
tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=DISABLED in hbase:meta 2018-12-04 20:49:28,519 INFO [PEWorker-3] procedure.DisableTableProcedure(310): Set testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 to state=DISABLED 2018-12-04 20:49:28,519 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=29, state=RUNNABLE:DISABLE_TABLE_POST_OPERATION, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 17th rollback step 2018-12-04 20:49:28,622 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=29, state=SUCCESS, locked=true; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 18th rollback step 2018-12-04 20:49:28,774 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=29, state=SUCCESS; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock 2018-12-04 20:49:28,774 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652] blockmanagement.BlockManager(3480): BLOCK* BlockManager: ask 127.0.0.1:54375 to delete [blk_1073741844_1020, blk_1073741845_1021, blk_1073741846_1022, blk_1073741847_1023, blk_1073741848_1024, blk_1073741849_1025] 2018-12-04 20:49:28,775 INFO [PEWorker-3] procedure2.ProcedureExecutor(1485): Finished pid=29, state=SUCCESS; DisableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 in 2.4980sec 2018-12-04 20:49:29,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=29 2018-12-04 20:49:29,187 
INFO [Time-limited test] client.HBaseAdmin$TableFuture(3666): Operation: DISABLE, Table Name: default:testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, procId: 29 completed 2018-12-04 20:49:29,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } 2018-12-04 20:49:29,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956569189 (current time:1543956569189). 2018-12-04 20:49:29,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x23456cd8 to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2018-12-04 20:49:29,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35876301, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2018-12-04 20:49:29,249 INFO [RS-EventLoopGroup-4-6] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:52280, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2018-12-04 20:49:29,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x23456cd8 to localhost:64381 2018-12-04 20:49:29,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client 2018-12-04 20:49:29,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot... 
2018-12-04 20:49:29,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotManager(639): Table is disabled, running snapshot entirely on master. 2018-12-04 20:49:29,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=36, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE 2018-12-04 20:49:29,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=36, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE 2018-12-04 20:49:29,447 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=36, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE 2018-12-04 20:49:29,447 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (36) sharedLock=0 size=0) from run queue because: pid=36, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE held the exclusive lock 2018-12-04 20:49:29,447 DEBUG [PEWorker-14] locking.LockProcedure(309): LOCKED 
pid=36, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE 2018-12-04 20:49:29,503 INFO [PEWorker-14] procedure2.TimeoutExecutorThread(82): ADDED pid=36, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE; timeout=600000, timestamp=1543957169503 2018-12-04 20:49:29,503 INFO [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(161): Running DISABLED table snapshot snaptb0-1543956551635 C_M_SNAPSHOT_TABLE on table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:29,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotManager(641): Started snapshot: { ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } 2018-12-04 20:49:29,503 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=36, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE as the 0th rollback step 2018-12-04 20:49:29,504 DEBUG [Time-limited test] client.HBaseAdmin(2537): Waiting a max of 300000 ms for snapshot '{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }'' to complete. (max 50000 ms per retry) 2018-12-04 20:49:29,504 DEBUG [Time-limited test] client.HBaseAdmin(2546): (#1) Sleeping: 250ms while waiting for snapshot completion. 
2018-12-04 20:49:29,519 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0 2018-12-04 20:49:29,519 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0 2018-12-04 20:49:29,547 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741857_1033 size 114 2018-12-04 20:49:29,553 INFO [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.DisabledTableSnapshotHandler(96): Starting to write region info and WALs for regions for offline snapshot:{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=DISABLED } 2018-12-04 20:49:29,557 DEBUG [DisabledTableSnapshot-pool20-t3] snapshot.SnapshotManifest(283): Storing region-info for snapshot. 2018-12-04 20:49:29,558 DEBUG [DisabledTableSnapshot-pool20-t3] snapshot.SnapshotManifest(288): Creating references for hfiles 2018-12-04 20:49:29,559 DEBUG [DisabledTableSnapshot-pool20-t2] snapshot.SnapshotManifest(283): Storing region-info for snapshot. 
2018-12-04 20:49:29,559 DEBUG [DisabledTableSnapshot-pool20-t6] snapshot.SnapshotManifest(283): Storing region-info for snapshot. 2018-12-04 20:49:29,559 DEBUG [DisabledTableSnapshot-pool20-t2] snapshot.SnapshotManifest(288): Creating references for hfiles 2018-12-04 20:49:29,559 DEBUG [DisabledTableSnapshot-pool20-t6] snapshot.SnapshotManifest(288): Creating references for hfiles 2018-12-04 20:49:29,561 DEBUG [DisabledTableSnapshot-pool20-t4] snapshot.SnapshotManifest(283): Storing region-info for snapshot. 2018-12-04 20:49:29,561 DEBUG [DisabledTableSnapshot-pool20-t4] snapshot.SnapshotManifest(288): Creating references for hfiles 2018-12-04 20:49:29,562 DEBUG [DisabledTableSnapshot-pool20-t5] snapshot.SnapshotManifest(283): Storing region-info for snapshot. 2018-12-04 20:49:29,562 DEBUG [DisabledTableSnapshot-pool20-t5] snapshot.SnapshotManifest(288): Creating references for hfiles 2018-12-04 20:49:29,562 DEBUG [DisabledTableSnapshot-pool20-t1] snapshot.SnapshotManifest(283): Storing region-info for snapshot. 
2018-12-04 20:49:29,562 DEBUG [DisabledTableSnapshot-pool20-t1] snapshot.SnapshotManifest(288): Creating references for hfiles 2018-12-04 20:49:29,572 DEBUG [DisabledTableSnapshot-pool20-t1] snapshot.SnapshotManifest(341): Adding snapshot references for [hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7af8d7b38aaf457abd95560952948487] hfiles 2018-12-04 20:49:29,572 DEBUG [DisabledTableSnapshot-pool20-t3] snapshot.SnapshotManifest(341): Adding snapshot references for [hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/b530bd7bb42341e99ea7d7a0184faff9] hfiles 2018-12-04 20:49:29,572 DEBUG [DisabledTableSnapshot-pool20-t1] snapshot.SnapshotManifest(349): Adding reference for hfile (1/1): hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7af8d7b38aaf457abd95560952948487 2018-12-04 20:49:29,573 DEBUG [DisabledTableSnapshot-pool20-t3] snapshot.SnapshotManifest(349): Adding reference for hfile (1/1): hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/b530bd7bb42341e99ea7d7a0184faff9 2018-12-04 20:49:29,574 DEBUG [DisabledTableSnapshot-pool20-t4] snapshot.SnapshotManifest(341): Adding snapshot references for 
[hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/439a631464a84405b356608f6150d86b] hfiles 2018-12-04 20:49:29,574 DEBUG [DisabledTableSnapshot-pool20-t4] snapshot.SnapshotManifest(349): Adding reference for hfile (1/1): hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/439a631464a84405b356608f6150d86b 2018-12-04 20:49:29,578 DEBUG [DisabledTableSnapshot-pool20-t6] snapshot.SnapshotManifest(341): Adding snapshot references for [hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c] hfiles 2018-12-04 20:49:29,578 DEBUG [DisabledTableSnapshot-pool20-t2] snapshot.SnapshotManifest(341): Adding snapshot references for [hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/ffcc935b84c24155a2ff3a4e328a2c16] hfiles 2018-12-04 20:49:29,578 DEBUG [DisabledTableSnapshot-pool20-t2] snapshot.SnapshotManifest(349): Adding reference for hfile (1/1): hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/ffcc935b84c24155a2ff3a4e328a2c16 2018-12-04 20:49:29,578 DEBUG [DisabledTableSnapshot-pool20-t6] snapshot.SnapshotManifest(349): Adding reference for hfile (1/1): 
hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c 2018-12-04 20:49:29,580 DEBUG [DisabledTableSnapshot-pool20-t5] snapshot.SnapshotManifest(341): Adding snapshot references for [hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/af88ed5650204580a5099262ce13c6a7] hfiles 2018-12-04 20:49:29,580 DEBUG [DisabledTableSnapshot-pool20-t5] snapshot.SnapshotManifest(349): Adding reference for hfile (1/1): hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/af88ed5650204580a5099262ce13c6a7 2018-12-04 20:49:29,663 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741859_1035{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 156 2018-12-04 20:49:29,664 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741859_1035 size 156 2018-12-04 20:49:29,665 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741860_1036{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 157 2018-12-04 20:49:29,665 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741858_1034{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 157 2018-12-04 20:49:29,665 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741861_1037{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 0 2018-12-04 20:49:29,666 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741860_1036 size 157 2018-12-04 20:49:29,666 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741861_1037{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], 
ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW]]} size 0 2018-12-04 20:49:29,666 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2018-12-04 20:49:29,667 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741858_1034 size 157 2018-12-04 20:49:29,668 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741861_1037 size 157 2018-12-04 20:49:29,675 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741860_1036 size 157 2018-12-04 20:49:29,675 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741858_1034 size 157 2018-12-04 20:49:29,675 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741859_1035 size 156 2018-12-04 20:49:29,679 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741862_1038{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:49:29,679 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741862_1038{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:49:29,681 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741862_1038 size 157 2018-12-04 20:49:29,694 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741863_1039{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0 2018-12-04 20:49:29,700 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741863_1039{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0 2018-12-04 20:49:29,700 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741863_1039{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], 
ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW]]} size 0 2018-12-04 20:49:29,755 DEBUG [Time-limited test] client.HBaseAdmin(2552): Getting current status of snapshot from master... 2018-12-04 20:49:29,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1161): Checking to see if snapshot from request:{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } is done 2018-12-04 20:49:29,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotManager(387): Snapshoting '{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }' is still in progress! 2018-12-04 20:49:29,760 DEBUG [Time-limited test] client.HBaseAdmin(2546): (#2) Sleeping: 500ms while waiting for snapshot completion. 2018-12-04 20:49:30,067 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.DisabledTableSnapshotHandler(118): Marking snapshot{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=DISABLED } as finished. 
2018-12-04 20:49:30,068 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.SnapshotManifest(466): Convert to Single Snapshot Manifest 2018-12-04 20:49:30,069 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.SnapshotManifestV1(125): No regions under directory:hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/.tmp/snaptb0-1543956551635 2018-12-04 20:49:30,102 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741864_1040{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:49:30,103 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741864_1040{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0 2018-12-04 20:49:30,103 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741864_1040{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|FINALIZED]]} size 0 2018-12-04 
20:49:30,105 INFO [IPC Server handler 0 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741863_1039 127.0.0.1:54375 127.0.0.1:60454 127.0.0.1:33680 2018-12-04 20:49:30,107 INFO [IPC Server handler 4 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741858_1034 127.0.0.1:33680 127.0.0.1:60454 127.0.0.1:54375 2018-12-04 20:49:30,108 INFO [IPC Server handler 9 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741861_1037 127.0.0.1:54375 127.0.0.1:33680 127.0.0.1:60454 2018-12-04 20:49:30,109 INFO [IPC Server handler 1 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741859_1035 127.0.0.1:60454 127.0.0.1:33680 127.0.0.1:54375 2018-12-04 20:49:30,111 INFO [IPC Server handler 7 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741860_1036 127.0.0.1:33680 127.0.0.1:54375 127.0.0.1:60454 2018-12-04 20:49:30,112 INFO [IPC Server handler 6 on 45471] blockmanagement.BlockManager(1168): BLOCK* addToInvalidates: blk_1073741862_1038 127.0.0.1:54375 127.0.0.1:33680 127.0.0.1:60454 2018-12-04 20:49:30,136 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(253): Sentinel is done, just moving the snapshot from hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/.tmp/snaptb0-1543956551635 to hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/snaptb0-1543956551635 2018-12-04 20:49:30,148 INFO [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(215): Snapshot snaptb0-1543956551635 of table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 completed 2018-12-04 20:49:30,148 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] snapshot.TakeSnapshotHandler(228): Launching cleanup of working 
dir:hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/.hbase-snapshot/.tmp/snaptb0-1543956551635 2018-12-04 20:49:30,151 DEBUG [MASTER_TABLE_OPERATIONS-master/asf910:0-0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (36) sharedLock=0 size=1) to run queue because: pid=36, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE has lock 2018-12-04 20:49:30,151 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (36) sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=36, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE 2018-12-04 20:49:30,152 DEBUG [PEWorker-11] locking.LockProcedure(240): UNLOCKED pid=36, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE 2018-12-04 20:49:30,152 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=36, state=SUCCESS, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE as the 1th rollback step 2018-12-04 20:49:30,261 DEBUG [Time-limited test] client.HBaseAdmin(2552): Getting current status of snapshot from master... 
2018-12-04 20:49:30,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1161): Checking to see if snapshot from request:{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH } is done 2018-12-04 20:49:30,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotManager(384): Snapshot '{ ss=snaptb0-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }' has completed, notifying client. 2018-12-04 20:49:30,263 INFO [Time-limited test] client.HBaseAdmin$14(854): Started enable of testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:30,264 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.HMaster$10(2491): Client=jenkins//67.195.81.154 enable testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:30,308 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=36, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE released the exclusive lock 2018-12-04 20:49:30,308 INFO [PEWorker-11] procedure2.ProcedureExecutor(1485): Finished pid=36, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE in 871msec 2018-12-04 20:49:30,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=37, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 
2018-12-04 20:49:30,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=37, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:30,370 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=37, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:30,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=37 2018-12-04 20:49:30,370 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=0) from run queue because: pid=37, state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock 2018-12-04 20:49:30,451 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=37, state=RUNNABLE:ENABLE_TABLE_PRE_OPERATION, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 0th rollback step 2018-12-04 20:49:30,603 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=37, state=RUNNABLE:ENABLE_TABLE_SET_ENABLING_TABLE_STATE, locked=true; EnableTableProcedure 
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 1th rollback step 2018-12-04 20:49:30,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=37 2018-12-04 20:49:30,766 INFO [PEWorker-9] procedure.EnableTableProcedure(372): Attempting to enable the table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:30,766 DEBUG [PEWorker-9] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956570766}]},"ts":1543956570766} 2018-12-04 20:49:30,770 INFO [PEWorker-9] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=ENABLING in hbase:meta 2018-12-04 20:49:30,789 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=37, state=RUNNABLE:ENABLE_TABLE_MARK_REGIONS_ONLINE, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 2th rollback step 2018-12-04 20:49:30,890 INFO [PEWorker-9] procedure2.ProcedureExecutor(1758): Initialized subprocedures=[{pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f}, {pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d}, {pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb}, {pid=41, ppid=37, 
state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde}, {pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90}, {pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324}] 2018-12-04 20:49:30,890 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=37, state=WAITING:ENABLE_TABLE_SET_ENABLED_TABLE_STATE, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 3th rollback step 2018-12-04 20:49:30,979 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=1) to run queue because: pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f has the excusive lock access 2018-12-04 20:49:30,979 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=2) to run queue because: pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d has the excusive lock access 2018-12-04 20:49:30,979 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, 
xlock=true (37) sharedLock=0 size=3) to run queue because: pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb has the excusive lock access 2018-12-04 20:49:30,979 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=4) to run queue because: pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde has the excusive lock access 2018-12-04 20:49:30,979 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=5) to run queue because: pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 has the excusive lock access 2018-12-04 20:49:30,980 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=6) to run queue because: pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 has the excusive lock access 2018-12-04 20:49:30,982 INFO [PEWorker-8] procedure.MasterProcedureScheduler(741): Took xlock for pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb 2018-12-04 
20:49:30,982 DEBUG [PEWorker-12] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=1 size=0) from run queue because: queue is empty after polling out pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:30,982 INFO [PEWorker-12] procedure.MasterProcedureScheduler(741): Took xlock for pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f 2018-12-04 20:49:30,982 INFO [PEWorker-5] procedure.MasterProcedureScheduler(741): Took xlock for pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 2018-12-04 20:49:30,983 INFO [PEWorker-10] procedure.MasterProcedureScheduler(741): Took xlock for pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:30,983 INFO [PEWorker-1] procedure.MasterProcedureScheduler(741): Took xlock for pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde 2018-12-04 20:49:30,983 INFO [PEWorker-6] procedure.MasterProcedureScheduler(741): Took xlock for pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:31,067 INFO [PEWorker-5] 
assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,067 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) to run queue because: pid=37, state=WAITING:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock 2018-12-04 20:49:31,067 INFO [PEWorker-10] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,067 INFO [PEWorker-12] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:31,067 INFO [PEWorker-8] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,067 INFO [PEWorker-1] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,068 INFO [PEWorker-8] assignment.AssignProcedure(254): Starting pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb; rit=OFFLINE, location=asf910.gq1.ygridcore.net,36011,1543956539302; forceNewPlan=false, retain=true 2018-12-04 20:49:31,068 INFO [PEWorker-12] assignment.AssignProcedure(254): Starting pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203; forceNewPlan=false, retain=true 2018-12-04 
20:49:31,068 INFO [PEWorker-10] assignment.AssignProcedure(254): Starting pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=true 2018-12-04 20:49:31,068 INFO [PEWorker-5] assignment.AssignProcedure(254): Starting pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90; rit=OFFLINE, location=asf910.gq1.ygridcore.net,36011,1543956539302; forceNewPlan=false, retain=true 2018-12-04 20:49:31,068 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb as the 4th rollback step 2018-12-04 20:49:31,068 INFO [PEWorker-1] assignment.AssignProcedure(254): Starting pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde; rit=OFFLINE, location=asf910.gq1.ygridcore.net,34504,1543956539068; forceNewPlan=false, retain=true 2018-12-04 20:49:31,069 DEBUG [PEWorker-5] procedure2.RootProcedureState(153): Add procedure pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 as the 5th rollback step 2018-12-04 20:49:31,069 DEBUG [PEWorker-1] procedure2.RootProcedureState(153): Add procedure pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, 
locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde as the 6th rollback step 2018-12-04 20:49:31,069 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 as the 7th rollback step 2018-12-04 20:49:31,069 DEBUG [PEWorker-12] procedure2.RootProcedureState(153): Add procedure pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f as the 8th rollback step 2018-12-04 20:49:31,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=37 2018-12-04 20:49:31,219 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2018-12-04 20:49:31,219 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 has lock 2018-12-04 20:49:31,219 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=2) to run queue because: pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde has lock 2018-12-04 20:49:31,219 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=3) to run queue because: pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 has lock 2018-12-04 20:49:31,219 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=4) to run queue because: pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb has lock 2018-12-04 20:49:31,219 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add 
TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=5) to run queue because: pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f has lock
2018-12-04 20:49:31,220 DEBUG [PEWorker-15] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:31,333 INFO [PEWorker-6] assignment.AssignProcedure(249): Setting lastHost as the region location asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,333 INFO [PEWorker-6] assignment.AssignProcedure(254): Starting pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d; rit=OFFLINE, location=asf910.gq1.ygridcore.net,51486,1543956539203; forceNewPlan=false, retain=true
2018-12-04 20:49:31,333 DEBUG [PEWorker-6] procedure2.RootProcedureState(153): Add procedure pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d as the 9th rollback step
2018-12-04 20:49:31,334 INFO [PEWorker-2] assignment.RegionStateStore(200): pid=41 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:31,334 INFO [PEWorker-13] assignment.RegionStateStore(200): pid=38 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,334 INFO [PEWorker-15] assignment.RegionStateStore(200): pid=43 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:31,334 INFO [PEWorker-16] assignment.RegionStateStore(200): pid=40 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:31,334 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=42 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:31,339 INFO [PEWorker-13] assignment.RegionTransitionProcedure(267): Dispatch pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:31,339 INFO [PEWorker-15] assignment.RegionTransitionProcedure(267): Dispatch pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:31,339 DEBUG [PEWorker-13] procedure2.RootProcedureState(153): Add procedure pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f as the 10th rollback step
2018-12-04 20:49:31,339 INFO [PEWorker-4] assignment.RegionTransitionProcedure(267): Dispatch pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:31,339 INFO [PEWorker-2] assignment.RegionTransitionProcedure(267): Dispatch pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:31,339 DEBUG [PEWorker-15] procedure2.RootProcedureState(153): Add procedure pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 as the 11th rollback step
2018-12-04 20:49:31,340 DEBUG [PEWorker-4] procedure2.RootProcedureState(153): Add procedure pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 as the 12th rollback step
2018-12-04 20:49:31,340 DEBUG [PEWorker-2] procedure2.RootProcedureState(153): Add procedure pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde as the 13th rollback step
2018-12-04 20:49:31,341 INFO [PEWorker-16] assignment.RegionTransitionProcedure(267): Dispatch pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:31,342 DEBUG [PEWorker-16] procedure2.RootProcedureState(153): Add procedure pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb as the 14th rollback step
2018-12-04 20:49:31,484 INFO [master/asf910:0] balancer.BaseLoadBalancer(1531): Reassigned 1 regions. 1 retained the pre-restart assignment.
2018-12-04 20:49:31,484 DEBUG [master/asf910:0] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d has lock
2018-12-04 20:49:31,485 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d
2018-12-04 20:49:31,485 INFO [PEWorker-3] assignment.RegionStateStore(200): pid=39 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=OPENING, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,491 INFO [RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=51486] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:31,491 INFO [PEWorker-3] assignment.RegionTransitionProcedure(267): Dispatch pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d
2018-12-04 20:49:31,491 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:49:31,491 INFO [RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=36011] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:31,491 DEBUG [PEWorker-3] procedure2.RootProcedureState(153): Add procedure pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d as the 15th rollback step
2018-12-04 20:49:31,496 INFO [RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=36011] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:31,496 INFO [RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:31,496 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(7177): Opening region: {ENCODED => 3694f6258e9e47dea826bcb208d58324, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.', STARTKEY => '5', ENDKEY => ''}
2018-12-04 20:49:31,496 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(7177): Opening region: {ENCODED => f54fb87a834cb50fd2027cf50bec8dde, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.', STARTKEY => '3', ENDKEY => '4'}
2018-12-04 20:49:31,496 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => eea7db479f05d0bfd00980b44810efbb, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.', STARTKEY => '2', ENDKEY => '3'}
2018-12-04 20:49:31,496 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(7177): Opening region: {ENCODED => 5abac36fc00b7260425322877c1d024f, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.', STARTKEY => '', ENDKEY => '1'}
2018-12-04 20:49:31,496 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(7177): Opening region: {ENCODED => 0cbbdc66f0b53e014d4b09cb9f965d90, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.', STARTKEY => '4', ENDKEY => '5'}
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:31,498 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:31,497 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-12-04 20:49:31,502 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf
2018-12-04 20:49:31,502 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf
2018-12-04 20:49:31,503 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf
2018-12-04 20:49:31,503 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf
2018-12-04 20:49:31,503 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:31,503 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf
2018-12-04 20:49:31,503 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf
2018-12-04 20:49:31,503 DEBUG [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf
2018-12-04 20:49:31,504 DEBUG [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf
2018-12-04 20:49:31,503 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:31,503 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf
2018-12-04 20:49:31,504 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf
2018-12-04 20:49:31,504 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:31,504 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:31,504 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:31,505 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-12-04 20:49:31,505 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:31,505 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:31,505 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:31,505 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-12-04 20:49:31,521 DEBUG [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] regionserver.HStore(584): loaded hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/b530bd7bb42341e99ea7d7a0184faff9
2018-12-04 20:49:31,521 DEBUG [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] regionserver.HStore(584): loaded hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/af88ed5650204580a5099262ce13c6a7
2018-12-04 20:49:31,522 INFO [StoreOpener-eea7db479f05d0bfd00980b44810efbb-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:31,522 INFO [StoreOpener-0cbbdc66f0b53e014d4b09cb9f965d90-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:31,522 DEBUG [StoreOpener-5abac36fc00b7260425322877c1d024f-1] regionserver.HStore(584): loaded hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c
2018-12-04 20:49:31,522 INFO [StoreOpener-5abac36fc00b7260425322877c1d024f-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:31,524 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:31,524 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:31,525 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:31,527 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:31,527 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:31,527 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:31,530 DEBUG [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] regionserver.HStore(584): loaded hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/ffcc935b84c24155a2ff3a4e328a2c16
2018-12-04 20:49:31,530 INFO [StoreOpener-f54fb87a834cb50fd2027cf50bec8dde-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:31,532 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:31,534 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:31,560 DEBUG [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] regionserver.HStore(584): loaded hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7af8d7b38aaf457abd95560952948487
2018-12-04 20:49:31,560 INFO [StoreOpener-3694f6258e9e47dea826bcb208d58324-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2018-12-04 20:49:31,562 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(998): writing seq id for 5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:31,562 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:31,563 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(998): writing seq id for 0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:31,563 INFO [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(1002): Opened 5abac36fc00b7260425322877c1d024f; next sequenceid=12
2018-12-04 20:49:31,563 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(998): writing seq id for f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:31,564 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for eea7db479f05d0bfd00980b44810efbb
2018-12-04 20:49:31,564 INFO [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(1002): Opened 0cbbdc66f0b53e014d4b09cb9f965d90; next sequenceid=12
2018-12-04 20:49:31,564 INFO [RS_OPEN_REGION-regionserver/asf910:0-2] regionserver.HRegion(1002): Opened f54fb87a834cb50fd2027cf50bec8dde; next sequenceid=12
2018-12-04 20:49:31,565 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324
2018-12-04 20:49:31,566 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened eea7db479f05d0bfd00980b44810efbb; next sequenceid=12
2018-12-04 20:49:31,566 INFO [PostOpenDeployTasks:5abac36fc00b7260425322877c1d024f] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:31,566 INFO [PostOpenDeployTasks:f54fb87a834cb50fd2027cf50bec8dde] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:49:31,567 INFO [PostOpenDeployTasks:0cbbdc66f0b53e014d4b09cb9f965d90] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:31,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=12, pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,567 INFO [PostOpenDeployTasks:eea7db479f05d0bfd00980b44810efbb] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:49:31,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f has lock
2018-12-04 20:49:31,567 DEBUG [PEWorker-7] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f
2018-12-04 20:49:31,568 DEBUG [PostOpenDeployTasks:5abac36fc00b7260425322877c1d024f] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:31,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=12, pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:31,569 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. on asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,568 DEBUG [PEWorker-7] assignment.RegionTransitionProcedure(387): Finishing pid=38, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=12, pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:49:31,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 has lock
2018-12-04 20:49:31,570 INFO [PEWorker-7] assignment.RegionStateStore(200): pid=38 updating hbase:meta row=5abac36fc00b7260425322877c1d024f, regionState=OPEN, openSeqNum=12, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:31,571 DEBUG [PEWorker-10] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90
2018-12-04 20:49:31,571 DEBUG [PostOpenDeployTasks:0cbbdc66f0b53e014d4b09cb9f965d90] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:49:31,571 DEBUG [PEWorker-10] assignment.RegionTransitionProcedure(387): Finishing pid=42, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:49:31,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde has lock
2018-12-04 20:49:31,575 DEBUG [PEWorker-14] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde
2018-12-04 20:49:31,575 DEBUG [PEWorker-14] assignment.RegionTransitionProcedure(387): Finishing pid=41, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure
table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,574 INFO [PEWorker-10] assignment.RegionStateStore(200): pid=42 updating hbase:meta row=0cbbdc66f0b53e014d4b09cb9f965d90, regionState=OPEN, openSeqNum=12, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,575 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. on asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=12, pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb has lock 2018-12-04 20:49:31,575 INFO [PEWorker-14] assignment.RegionStateStore(200): pid=41 updating hbase:meta row=f54fb87a834cb50fd2027cf50bec8dde, regionState=OPEN, openSeqNum=12, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,576 DEBUG [PostOpenDeployTasks:eea7db479f05d0bfd00980b44810efbb] 
regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:49:31,575 DEBUG [PostOpenDeployTasks:f54fb87a834cb50fd2027cf50bec8dde] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:49:31,578 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. on asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,576 DEBUG [PEWorker-11] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb 2018-12-04 20:49:31,586 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(998): writing seq id for 3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:31,587 DEBUG [PEWorker-10] procedure2.RootProcedureState(153): Add procedure pid=42, ppid=37, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 as the 16th rollback step 2018-12-04 20:49:31,587 INFO [RS_OPEN_REGION-regionserver/asf910:0-1] regionserver.HRegion(1002): Opened 3694f6258e9e47dea826bcb208d58324; next sequenceid=12 2018-12-04 20:49:31,589 INFO [PostOpenDeployTasks:3694f6258e9e47dea826bcb208d58324] regionserver.HRegionServer(2177): Post open 
deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 2018-12-04 20:49:31,589 DEBUG [PEWorker-7] procedure2.RootProcedureState(153): Add procedure pid=38, ppid=37, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f as the 17th rollback step 2018-12-04 20:49:31,590 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-2] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. on asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=12, pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 has lock 2018-12-04 20:49:31,591 DEBUG [PostOpenDeployTasks:3694f6258e9e47dea826bcb208d58324] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 
2018-12-04 20:49:31,593 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-1] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. on asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,593 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after polling out pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 2018-12-04 20:49:31,593 DEBUG [PEWorker-9] assignment.RegionTransitionProcedure(387): Finishing pid=43, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324; rit=OPENING, location=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,593 INFO [PEWorker-9] assignment.RegionStateStore(200): pid=43 updating hbase:meta row=3694f6258e9e47dea826bcb208d58324, regionState=OPEN, openSeqNum=12, regionLocation=asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:49:31,595 DEBUG [PEWorker-14] procedure2.RootProcedureState(153): Add procedure pid=41, ppid=37, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde as the 18th rollback step 2018-12-04 20:49:31,596 DEBUG [PEWorker-9] procedure2.RootProcedureState(153): Add procedure pid=43, ppid=37, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 as the 19th rollback step 
2018-12-04 20:49:31,617 DEBUG [PEWorker-11] assignment.RegionTransitionProcedure(387): Finishing pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb; rit=OPENING, location=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,618 INFO [PEWorker-11] assignment.RegionStateStore(200): pid=40 updating hbase:meta row=eea7db479f05d0bfd00980b44810efbb, regionState=OPEN, openSeqNum=12, regionLocation=asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:49:31,621 DEBUG [PEWorker-11] procedure2.RootProcedureState(153): Add procedure pid=40, ppid=37, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb as the 20th rollback step 2018-12-04 20:49:31,644 INFO [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.RSRpcServices(1987): Open testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 
2018-12-04 20:49:31,650 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(7177): Opening region: {ENCODED => 17bf706db6019b3980612acaaf29410d, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.', STARTKEY => '1', ENDKEY => '2'} 2018-12-04 20:49:31,650 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:31,650 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(833): Instantiated testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2018-12-04 20:49:31,668 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf 2018-12-04 20:49:31,668 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf 2018-12-04 20:49:31,669 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] hfile.CacheConfig(237): Created cacheConfig for cf: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, 
singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2018-12-04 20:49:31,669 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2018-12-04 20:49:31,681 DEBUG [StoreOpener-17bf706db6019b3980612acaaf29410d-1] regionserver.HStore(584): loaded hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/439a631464a84405b356608f6150d86b 2018-12-04 20:49:31,682 INFO [StoreOpener-17bf706db6019b3980612acaaf29410d-1] regionserver.HStore(332): Store=cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2018-12-04 20:49:31,683 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:31,685 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(4571): Found 0 recovered edits file(s) under 
hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:31,688 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(998): writing seq id for 17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:31,689 INFO [RS_OPEN_REGION-regionserver/asf910:0-0] regionserver.HRegion(1002): Opened 17bf706db6019b3980612acaaf29410d; next sequenceid=12 2018-12-04 20:49:31,690 INFO [PostOpenDeployTasks:17bf706db6019b3980612acaaf29410d] regionserver.HRegionServer(2177): Post open deploy tasks for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:31,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] assignment.RegionTransitionProcedure(290): Received report OPENED seqId=12, pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:31,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=1) to run queue because: pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d has lock 2018-12-04 20:49:31,692 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=6 size=0) from run queue because: queue is empty after 
polling out pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d 2018-12-04 20:49:31,693 DEBUG [PEWorker-5] assignment.RegionTransitionProcedure(387): Finishing pid=39, ppid=37, state=RUNNABLE:REGION_TRANSITION_FINISH, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d; rit=OPENING, location=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:31,693 INFO [PEWorker-5] assignment.RegionStateStore(200): pid=39 updating hbase:meta row=17bf706db6019b3980612acaaf29410d, regionState=OPEN, openSeqNum=12, regionLocation=asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:31,695 DEBUG [PostOpenDeployTasks:17bf706db6019b3980612acaaf29410d] regionserver.HRegionServer(2201): Finished post open deploy task for testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:49:31,696 DEBUG [PEWorker-5] procedure2.RootProcedureState(153): Add procedure pid=39, ppid=37, state=SUCCESS, locked=true; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d as the 21th rollback step 2018-12-04 20:49:31,701 DEBUG [RS_OPEN_REGION-regionserver/asf910:0-0] handler.OpenRegionHandler(127): Opened testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 
on asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:49:31,775 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652] blockmanagement.BlockManager(3480): BLOCK* BlockManager: ask 127.0.0.1:33680 to delete [blk_1073741858_1034, blk_1073741859_1035, blk_1073741860_1036, blk_1073741861_1037, blk_1073741862_1038, blk_1073741863_1039] 2018-12-04 20:49:31,860 INFO [PEWorker-10] procedure2.ProcedureExecutor(1485): Finished pid=42, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=0cbbdc66f0b53e014d4b09cb9f965d90 in 704msec, unfinishedSiblingCount=4 2018-12-04 20:49:31,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1485): Finished pid=39, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=17bf706db6019b3980612acaaf29410d in 813msec, unfinishedSiblingCount=1 2018-12-04 20:49:31,860 INFO [PEWorker-14] procedure2.ProcedureExecutor(1485): Finished pid=41, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=f54fb87a834cb50fd2027cf50bec8dde in 712msec, unfinishedSiblingCount=4 2018-12-04 20:49:31,861 INFO [PEWorker-7] procedure2.ProcedureExecutor(1485): Finished pid=38, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=5abac36fc00b7260425322877c1d024f in 706msec, unfinishedSiblingCount=1 2018-12-04 20:49:31,861 INFO [PEWorker-11] procedure2.ProcedureExecutor(1485): Finished pid=40, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=eea7db479f05d0bfd00980b44810efbb in 738msec, unfinishedSiblingCount=1 2018-12-04 20:49:31,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1179): 
Checking to see if procedure is done pid=37 2018-12-04 20:49:32,020 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=43, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 released the shared lock 2018-12-04 20:49:32,079 DEBUG [PEWorker-9] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=37, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:32,079 INFO [PEWorker-9] procedure2.ProcedureExecutor(1897): Finished subprocedure pid=43, resume processing parent pid=37, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:32,079 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=37, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:32,079 INFO [PEWorker-9] procedure2.ProcedureExecutor(1485): Finished pid=43, ppid=37, state=SUCCESS; AssignProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, region=3694f6258e9e47dea826bcb208d58324 in 713msec, unfinishedSiblingCount=0 2018-12-04 20:49:32,079 DEBUG 
[PEWorker-8] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=true (37) sharedLock=0 size=0) from run queue because: pid=37, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 held the exclusive lock 2018-12-04 20:49:32,135 DEBUG [PEWorker-8] hbase.MetaTableAccessor(2153): Put {"totalColumns":1,"row":"testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1543956572135}]},"ts":1543956572135} 2018-12-04 20:49:32,138 INFO [PEWorker-8] hbase.MetaTableAccessor(1673): Updated tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, state=ENABLED in hbase:meta 2018-12-04 20:49:32,148 INFO [PEWorker-8] procedure.EnableTableProcedure(390): Table 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635' was successfully enabled. 
2018-12-04 20:49:32,148 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=37, state=RUNNABLE:ENABLE_TABLE_POST_OPERATION, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 22th rollback step 2018-12-04 20:49:32,234 DEBUG [PEWorker-8] procedure2.RootProcedureState(153): Add procedure pid=37, state=SUCCESS, locked=true; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 as the 23th rollback step 2018-12-04 20:49:32,370 DEBUG [PEWorker-8] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) to run queue because: pid=37, state=SUCCESS; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 released the exclusive lock 2018-12-04 20:49:32,371 INFO [PEWorker-8] procedure2.ProcedureExecutor(1485): Finished pid=37, state=SUCCESS; EnableTableProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 in 1.9700sec 2018-12-04 20:49:33,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.MasterRpcServices(1179): Checking to see if procedure is done pid=37 2018-12-04 20:49:33,129 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3666): Operation: ENABLE, Table Name: default:testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, procId: 37 completed 2018-12-04 20:49:33,150 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. with WAL disabled. Data may be lost in the event of a crash. 
2018-12-04 20:49:33,152 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=51486] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. with WAL disabled. Data may be lost in the event of a crash. 2018-12-04 20:49:33,153 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. with WAL disabled. Data may be lost in the event of a crash. 2018-12-04 20:49:33,153 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. with WAL disabled. Data may be lost in the event of a crash. 2018-12-04 20:49:33,154 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=36011] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90. with WAL disabled. Data may be lost in the event of a crash. 2018-12-04 20:49:33,159 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=34504] regionserver.HRegion(8403): writing data to region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. with WAL disabled. Data may be lost in the event of a crash. 
2018-12-04 20:49:33,171 ERROR [Time-limited test] hbase.HBaseTestingUtility(2442): No region info for row hbase:namespace 2018-12-04 20:49:33,172 ERROR [Time-limited test] hbase.HBaseTestingUtility(2442): No region info for row testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:33,172 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.{ENCODED => 5abac36fc00b7260425322877c1d024f, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.', STARTKEY => '', ENDKEY => '1'} 2018-12-04 20:49:33,172 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.{ENCODED => 17bf706db6019b3980612acaaf29410d, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.', STARTKEY => '1', ENDKEY => '2'} 2018-12-04 20:49:33,173 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.{ENCODED => eea7db479f05d0bfd00980b44810efbb, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.', STARTKEY => '2', ENDKEY => '3'} 2018-12-04 20:49:33,173 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.{ENCODED => f54fb87a834cb50fd2027cf50bec8dde, NAME => 
'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.', STARTKEY => '3', ENDKEY => '4'} 2018-12-04 20:49:33,173 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.{ENCODED => 0cbbdc66f0b53e014d4b09cb9f965d90, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.', STARTKEY => '4', ENDKEY => '5'} 2018-12-04 20:49:33,173 INFO [Time-limited test] hbase.HBaseTestingUtility(2448): getMetaTableRows: row -> testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.{ENCODED => 3694f6258e9e47dea826bcb208d58324, NAME => 'testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.', STARTKEY => '5', ENDKEY => ''} 2018-12-04 20:49:33,173 DEBUG [Time-limited test] hbase.HBaseTestingUtility(2490): Found 6 rows for table testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 2018-12-04 20:49:33,173 DEBUG [Time-limited test] hbase.HBaseTestingUtility(2493): FirstRow=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. 
2018-12-04 20:49:33,176 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-12-04 20:49:33,272 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '1', inclusive
2018-12-04 20:49:33,284 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '2', inclusive
2018-12-04 20:49:33,293 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '3', inclusive
2018-12-04 20:49:33,301 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '4', inclusive
2018-12-04 20:49:33,314 DEBUG [Time-limited test] client.ClientScanner(242): Advancing internal scanner to startKey at '5', inclusive
2018-12-04 20:49:33,446 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.HMaster$3(1860): Client=jenkins//67.195.81.154 split testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:49:33,470 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.StoreUtils(123): cannot split hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c because midkey is the same as first or last row
2018-12-04 20:49:33,472 INFO [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=2.16 KB heapSize=4.84 KB
2018-12-04 20:49:33,566 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741865_1041{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|FINALIZED]]} size 0
2018-12-04 20:49:33,568 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741865_1041 size 7218
2018-12-04 20:49:33,569 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741865_1041 size 7218
2018-12-04 20:49:33,569 INFO [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.16 KB at sequenceid=15 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/.tmp/cf/52bcedb2441f4e458788423ce2f9b1f6
2018-12-04 20:49:33,585 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/.tmp/cf/52bcedb2441f4e458788423ce2f9b1f6 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/52bcedb2441f4e458788423ce2f9b1f6
2018-12-04 20:49:33,623 INFO [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/52bcedb2441f4e458788423ce2f9b1f6, entries=33, sequenceid=15, filesize=7.0 K
2018-12-04 20:49:33,627 INFO [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.HRegion(2816): Finished flush of dataSize ~2.16 KB/2209, heapSize ~4.85 KB/4968, currentSize=0 B/0 for 5abac36fc00b7260425322877c1d024f in 156ms, sequenceid=15, compaction requested=false
2018-12-04 20:49:33,628 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.HRegion(2332): Flush status journal: Acquiring readlock on region at 1543956573470 Running coprocessor pre-flush hooks at 1543956573471 Obtaining lock to block concurrent updates at 1543956573472 Preparing flush snapshotting stores in 5abac36fc00b7260425322877c1d024f at 1543956573472 Finished memstore snapshotting testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f., syncing WAL and waiting on mvcc, flushsize=dataSize=2209, getHeapSize=4968, getOffHeapSize=0 at 1543956573472 Flushing stores of testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f. at 1543956573507 Flushing cf: creating writer at 1543956573509 Flushing cf: appending metadata at 1543956573516 Flushing cf: closing flushed file at 1543956573516 Flushing cf: reopening flushed file at 1543956573587 Finished flush of dataSize ~2.16 KB/2209, heapSize ~4.85 KB/4968, currentSize=0 B/0 for 5abac36fc00b7260425322877c1d024f in 156ms, sequenceid=15, compaction requested=false at 1543956573627 Running post-flush coprocessor hooks at 1543956573627 Flush successful at 1543956573627
2018-12-04 20:49:33,628 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=51486] regionserver.StoreUtils(123): cannot split hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/cf/c49c415510d34e33b204433bd5297b6c because midkey is the same as first or last row
2018-12-04 20:49:33,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] assignment.SplitTableRegionProcedure(189): Splittable=true rit=OPEN, location=asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:49:33,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=44, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, parent=5abac36fc00b7260425322877c1d024f, daughterA=d676bdddf4e81cdb54f3e9490d06fd29, daughterB=3f5e75b6ba627790a0b71773578c4dce
2018-12-04 20:49:33,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=44, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, parent=5abac36fc00b7260425322877c1d024f, daughterA=d676bdddf4e81cdb54f3e9490d06fd29, daughterB=3f5e75b6ba627790a0b71773578c4dce
2018-12-04 20:49:33,836 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=0 size=0) from run queue because: queue is empty after polling out pid=44, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, parent=5abac36fc00b7260425322877c1d024f, daughterA=d676bdddf4e81cdb54f3e9490d06fd29, daughterB=3f5e75b6ba627790a0b71773578c4dce
2018-12-04 20:49:33,837 INFO [PEWorker-1] procedure.MasterProcedureScheduler(741): Took xlock for pid=44, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, parent=5abac36fc00b7260425322877c1d024f, daughterA=d676bdddf4e81cdb54f3e9490d06fd29, daughterB=3f5e75b6ba627790a0b71773578c4dce
2018-12-04 20:49:33,841 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=snaptb1-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:49:33,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956573841 (current time:1543956573841).
2018-12-04 20:49:33,842 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x2389caaa to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:49:33,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c2ab412, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:49:33,895 INFO [RS-EventLoopGroup-4-7] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:52392, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:49:33,897 INFO [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x2389caaa to localhost:64381
2018-12-04 20:49:33,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:49:33,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot...
2018-12-04 20:49:33,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] snapshot.SnapshotManager(632): Table enabled, starting distributed snapshot.
2018-12-04 20:49:34,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure2.ProcedureExecutor(1092): Stored pid=45, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:34,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736] procedure.MasterProcedureScheduler(356): Add TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=1 size=1) to run queue because: the exclusive lock is not held by anyone when adding pid=45, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, type=EXCLUSIVE
2018-12-04 20:49:34,079 DEBUG [PEWorker-13] procedure.MasterProcedureScheduler(366): Remove TableQueue(testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635, xlock=false sharedLock=1 size=1) from run queue because: no procedure can be executed
2018-12-04 20:49:34,776 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652] blockmanagement.BlockManager(3480): BLOCK* BlockManager: ask 127.0.0.1:60454 to delete [blk_1073741858_1034, blk_1073741859_1035, blk_1073741860_1036, blk_1073741861_1037, blk_1073741862_1038, blk_1073741863_1039]
2018-12-04 20:49:37,003 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2018-12-04 20:49:37,776 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652] blockmanagement.BlockManager(3480): BLOCK* BlockManager: ask 127.0.0.1:54375 to delete [blk_1073741858_1034, blk_1073741859_1035, blk_1073741860_1036, blk_1073741861_1037, blk_1073741862_1038, blk_1073741863_1039]
2018-12-04 20:49:46,407 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 12.5700sec
2018-12-04 20:49:51,408 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 17.5710sec
2018-12-04 20:49:56,409 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 22.5720sec
2018-12-04 20:50:01,410 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 27.5730sec
2018-12-04 20:50:06,410 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 32.5730sec
2018-12-04 20:50:11,411 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 37.5740sec
2018-12-04 20:50:16,412 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 42.5750sec
2018-12-04 20:50:21,413 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 47.5750sec
2018-12-04 20:50:26,414 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 52.5770sec
2018-12-04 20:50:31,416 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 57.5780sec
2018-12-04 20:50:34,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=snaptb1-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:50:34,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956634102 (current time:1543956634102).
2018-12-04 20:50:34,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x49c6068c to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:50:34,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1163ef3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:50:34,131 INFO [RS-EventLoopGroup-4-8] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:53968, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:50:34,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x49c6068c to localhost:64381
2018-12-04 20:50:34,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:50:34,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot...
2018-12-04 20:50:36,416 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 2.579sec
2018-12-04 20:50:41,417 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 7.58sec
2018-12-04 20:50:46,418 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 12.581sec
2018-12-04 20:50:51,419 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 17.582sec
2018-12-04 20:50:56,419 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 22.582sec
2018-12-04 20:51:01,420 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 27.583sec
2018-12-04 20:51:06,421 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 32.584sec
2018-12-04 20:51:11,422 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 37.585sec
2018-12-04 20:51:16,423 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 42.586sec
2018-12-04 20:51:21,423 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 47.586sec
2018-12-04 20:51:26,424 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 52.586sec
2018-12-04 20:51:31,424 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 1mins, 57.587sec
2018-12-04 20:51:34,612 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=snaptb1-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:51:34,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956694613 (current time:1543956694613).
2018-12-04 20:51:34,614 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x4d7c7e09 to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:51:34,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3daa0244, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:51:34,652 INFO [RS-EventLoopGroup-4-9] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:55297, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:51:34,655 INFO [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x4d7c7e09 to localhost:64381
2018-12-04 20:51:34,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:51:34,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot...
2018-12-04 20:51:36,425 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 2.588sec
2018-12-04 20:51:41,425 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 7.588sec
2018-12-04 20:51:46,426 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 12.589sec
2018-12-04 20:51:51,427 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 17.59sec
2018-12-04 20:51:56,428 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 22.591sec
2018-12-04 20:52:01,428 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 27.591sec
2018-12-04 20:52:06,428 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 32.591sec
2018-12-04 20:52:11,429 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 37.592sec
2018-12-04 20:52:16,429 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 42.592sec
2018-12-04 20:52:21,430 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 47.593sec
2018-12-04 20:52:26,430 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 52.593sec
2018-12-04 20:52:31,433 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 2mins, 57.596sec
2018-12-04 20:52:35,377 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=snaptb1-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:52:35,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956755377 (current time:1543956755377).
2018-12-04 20:52:35,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x003db56d to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:52:35,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18f3cc53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:52:35,424 INFO [RS-EventLoopGroup-4-10] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:56478, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:52:35,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x003db56d to localhost:64381
2018-12-04 20:52:35,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:52:35,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot...
2018-12-04 20:52:36,434 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 2.597sec
2018-12-04 20:52:41,434 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 7.597sec
2018-12-04 20:52:46,435 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 12.598sec
2018-12-04 20:52:51,435 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 17.598sec
2018-12-04 20:52:56,436 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 22.599sec
2018-12-04 20:53:01,436 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 27.599sec
2018-12-04 20:53:06,437 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 32.6sec
2018-12-04 20:53:11,438 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 37.6sec
2018-12-04 20:53:16,439 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 42.602sec
2018-12-04 20:53:21,440 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 47.603sec
2018-12-04 20:53:26,441 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 52.604sec
2018-12-04 20:53:31,441 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 3mins, 57.604sec
2018-12-04 20:53:36,441 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 2.604sec
2018-12-04 20:53:36,648 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] master.MasterRpcServices(1497): Client=jenkins//67.195.81.154 snapshot request for:{ ss=snaptb1-1543956551635 table=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635 type=FLUSH }
2018-12-04 20:53:36,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] snapshot.SnapshotDescriptionUtils(266): Creation time not specified, setting to:1543956816648 (current time:1543956816648).
2018-12-04 20:53:36,650 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] zookeeper.ReadOnlyZKClient(139): Connect 0x1b9c5c8a to localhost:64381 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-12-04 20:53:36,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7909f4d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2018-12-04 20:53:36,721 INFO [RS-EventLoopGroup-4-11] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:59601, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:53:36,724 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x1b9c5c8a to localhost:64381
2018-12-04 20:53:36,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:53:36,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736] snapshot.SnapshotManager(584): No existing snapshot, attempting snapshot...
2018-12-04 20:53:41,442 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 7.605sec
2018-12-04 20:53:46,442 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 12.605sec
2018-12-04 20:53:51,443 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 17.606sec
2018-12-04 20:53:56,443 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 22.606sec
2018-12-04 20:53:59,122 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache(944): totalSize=784.55 KB, freeSize=994.83 MB, max=995.60 MB, blockCount=6, accesses=12, hits=0, hitRatio=0, cachingAccesses=6, cachingHits=0, cachingHitsRatio=0,evictions=30, evicted=0, evictedPerRun=0.0
2018-12-04 20:54:01,444 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 27.607sec
2018-12-04 20:54:01,857 DEBUG [RS:2;asf910:36011-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): data stats (chunk size=2097152): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-12-04 20:54:01,857 DEBUG [RS:1;asf910:51486-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): data stats (chunk size=2097152): current pool size=1, created chunk count=9, reused chunk count=6, reuseRatio=40.00%
2018-12-04 20:54:01,858 DEBUG [RS:2;asf910:36011-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-12-04 20:54:01,858 DEBUG [RS:1;asf910:51486-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-12-04 20:54:04,695 DEBUG [master/asf910:0.Chore.1] balancer.StochasticLoadBalancer(294): RegionReplicaHostCostFunction not needed
2018-12-04 20:54:04,695 DEBUG [master/asf910:0.Chore.1] balancer.StochasticLoadBalancer(294): RegionReplicaRackCostFunction not needed
2018-12-04 20:54:04,718 INFO [RS-EventLoopGroup-4-12] ipc.ServerRpcConnection(556): Connection from 67.195.81.154:60601, version=2.1.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2018-12-04 20:54:05,806 INFO [regionserver/asf910:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1768): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 222722 ms
2018-12-04 20:54:06,444 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 32.607sec
2018-12-04 20:54:08,216 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2018-12-04 20:54:08,815 INFO [regionserver/asf910:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1768): MemstoreFlusherChore requesting flush of hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. because info has an old edit so flush to free WALs after random delay 32275 ms
2018-12-04 20:54:11,444 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 37.607sec
2018-12-04 20:54:16,445 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 42.608sec
2018-12-04 20:54:21,446 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 47.609sec
2018-12-04 20:54:26,446 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 52.609sec
2018-12-04 20:54:31,489 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 4mins, 57.652sec
2018-12-04 20:54:36,490 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 2.653sec
2018-12-04 20:54:41,095 INFO [MemStoreFlusher.0] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=78 B heapSize=440 B
2018-12-04 20:54:41,128 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741866_1042{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:54:41,128 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741866_1042{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:54:41,128 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741866_1042{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:54:41,130 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/.tmp/info/5e0e6cc899fc42d299ba1dbb740b5bcc
2018-12-04 20:54:41,141 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/.tmp/info/5e0e6cc899fc42d299ba1dbb740b5bcc as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/info/5e0e6cc899fc42d299ba1dbb740b5bcc
2018-12-04 20:54:41,150 INFO [MemStoreFlusher.0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/info/5e0e6cc899fc42d299ba1dbb740b5bcc, entries=2, sequenceid=6, filesize=4.8 K
2018-12-04 20:54:41,159 INFO [MemStoreFlusher.0] regionserver.HRegion(2816): Finished flush of dataSize ~78 B/78, heapSize ~448 B/448, currentSize=0 B/0 for 9ec9c1da4947b53085aaed5a2a3da06b in 64ms, sequenceid=6, compaction requested=false
2018-12-04 20:54:41,160 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2332): Flush status journal: Acquiring readlock on region at 1543956881094 Running coprocessor pre-flush hooks at 1543956881094 Obtaining lock to block concurrent updates at 1543956881095 Preparing flush snapshotting stores in 9ec9c1da4947b53085aaed5a2a3da06b at 1543956881095 Finished memstore snapshotting hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b., syncing WAL and waiting on mvcc, flushsize=dataSize=78, getHeapSize=448, getOffHeapSize=0 at 1543956881096 Flushing stores of hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b. at 1543956881097 Flushing info: creating writer at 1543956881098 Flushing info: appending metadata at 1543956881110 Flushing info: closing flushed file at 1543956881110 Flushing info: reopening flushed file at 1543956881143 Finished flush of dataSize ~78 B/78, heapSize ~448 B/448, currentSize=0 B/0 for 9ec9c1da4947b53085aaed5a2a3da06b in 64ms, sequenceid=6, compaction requested=false at 1543956881160 Running post-flush coprocessor hooks at 1543956881160 Flush successful at 1543956881160
2018-12-04 20:54:41,490 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 7.653sec
2018-12-04 20:54:46,491 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 12.654sec
2018-12-04 20:54:51,491 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 17.654sec
2018-12-04 20:54:56,492 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 22.655sec
2018-12-04 20:55:01,577 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker
stuck PEWorker-1(pid=44), run time 5mins, 27.74sec 2018-12-04 20:55:06,577 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 32.74sec 2018-12-04 20:55:11,578 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 37.741sec 2018-12-04 20:55:16,579 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 42.742sec 2018-12-04 20:55:21,579 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 47.742sec 2018-12-04 20:55:26,580 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 52.742sec 2018-12-04 20:55:31,580 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 5mins, 57.743sec 2018-12-04 20:55:36,581 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 2.744sec 2018-12-04 20:55:39,177 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection java.io.IOException: Call to asf910.gq1.ygridcore.net/67.195.81.154:53736 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=150, waitTime=60007, rpcTimeout=60000 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:185) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96) at 
org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:663) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:738) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:466) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=150, waitTime=60007, rpcTimeout=60000 at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200) ... 4 more 2018-12-04 20:55:41,581 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 7.744sec 2018-12-04 20:55:46,581 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 12.744sec 2018-12-04 20:55:51,581 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 17.744sec 2018-12-04 20:55:56,582 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 22.745sec 2018-12-04 20:56:01,582 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 27.745sec 2018-12-04 20:56:06,583 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 32.746sec 2018-12-04 20:56:11,583 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 37.746sec 2018-12-04 20:56:16,583 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 42.746sec 2018-12-04 20:56:21,584 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck 
PEWorker-1(pid=44), run time 6mins, 47.746sec 2018-12-04 20:56:26,584 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 52.747sec 2018-12-04 20:56:31,584 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 6mins, 57.747sec 2018-12-04 20:56:36,584 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 2.747sec 2018-12-04 20:56:41,584 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 7.747sec 2018-12-04 20:56:44,219 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.asyncSnapshot(HBaseAdmin.java:2579) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2529) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2520) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2513) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientAfterSplittingRegionsTestBase.testRestoreSnapshotAfterSplittingRegions(RestoreSnapshotFromClientAfterSplittingRegionsTestBase.java:36) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 
1 more 2018-12-04 20:56:44,270 DEBUG [Time-limited test] client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=7, started=430428 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?, details=, exception=org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1175) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1234) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.asyncSnapshot(HBaseAdmin.java:2579) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2529) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2520) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2513) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientAfterSplittingRegionsTestBase.testRestoreSnapshotAfterSplittingRegions(RestoreSnapshotFromClientAfterSplittingRegionsTestBase.java:36) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionImplementation.java:1125) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1153) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1169) ... 45 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:56:44,273 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:56:44,530 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
    at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912)
    at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906)
    at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:56:45,039 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
2018-12-04 20:56:45,798 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
2018-12-04 20:56:46,585 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 12.748sec
2018-12-04 20:56:47,053 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
2018-12-04 20:56:49,579 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 
1 more 2018-12-04 20:56:51,585 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 17.748sec 2018-12-04 20:56:54,631 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at 
org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:56:54,637 DEBUG [Time-limited test] client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=7, started=10364 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?, details=, exception=org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1175) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1234) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionImplementation.java:1125) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1153) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1169) ... 42 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 
1 more 2018-12-04 20:56:54,705 INFO [Time-limited test] hbase.ResourceChecker(172): after: client.TestRestoreSnapshotFromClientAfterSplittingRegions#testRestoreSnapshotAfterSplittingRegions[0: regionReplication=1] Thread=427 (was 392) Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-4-5 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Default-IPC-NioEventLoopGroup-7-2
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Parameter Sending Thread #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-6
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Default-IPC-NioEventLoopGroup-7-3
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-10
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-9
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-11
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-8
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-8
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-1-5
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-4
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: MASTER_TABLE_OPERATIONS-master/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-8
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Default-IPC-NioEventLoopGroup-7-4
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-12
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-9
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-4-7
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/asf910:0-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/asf910:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/asf910:0-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
 - Thread LEAK? -, OpenFileDescriptor=1538 (was 1593), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=1534 (was 1008) - SystemLoadAverage LEAK? -, ProcessCount=302 (was 302), AvailableMemoryMB=11041 (was 13040)
2018-12-04 20:56:54,707 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=1538 is superior to 1024
2018-12-04 20:56:54,747 INFO [Time-limited test] hbase.ResourceChecker(148): before: client.TestRestoreSnapshotFromClientAfterSplittingRegions#testRestoreSnapshotAfterSplittingRegions[1: regionReplication=3] Thread=427, OpenFileDescriptor=1538, MaxFileDescriptor=60000, SystemLoadAverage=1534, ProcessCount=302, AvailableMemoryMB=11402
2018-12-04 20:56:54,748 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=1538 is superior to 1024
2018-12-04 20:56:54,749 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:56:55,004 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:56:55,514 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:56:56,277 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778)
    at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119)
    at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:56:56,586 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 22.749sec
2018-12-04 20:56:57,541 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
    at
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647) at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620) at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:57:00,066 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647) at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620) at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806) at 
org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:57:01,586 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 27.749sec 2018-12-04 20:57:05,097 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647) at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620) at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 
1 more 2018-12-04 20:57:05,107 DEBUG [Time-limited test] client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=7, started=10355 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?, details=, exception=org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1175) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1234) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647) at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620) at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1468) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:778) at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.createTable(SnapshotTestingUtils.java:806) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.createTable(RestoreSnapshotFromClientTestBase.java:119) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.setup(RestoreSnapshotFromClientTestBase.java:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionImplementation.java:1125)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1153)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1169)
	... 46 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	... 1 more
2018-12-04 20:57:05,110 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
	at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906)
	at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701)
	at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	... 1 more
2018-12-04 20:57:05,366 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
	at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906)
	at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701)
	at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	... 1 more
2018-12-04 20:57:05,888 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
	at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906)
	at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701)
	at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	... 1 more
2018-12-04 20:57:06,586 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 32.749sec
2018-12-04 20:57:06,649 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection
org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
	at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223)
	at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912)
	at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906)
	at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701)
	at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:57:07,911 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:57:10,430 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:57:11,586 WARN [ProcExecTimeout] procedure2.ProcedureExecutor$WorkerMonitor(2147): Worker stuck PEWorker-1(pid=44), run time 7mins, 37.749sec 2018-12-04 20:57:15,457 WARN [Time-limited test] client.ConnectionImplementation(1759): Checking master connection org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350) at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionImplementation.java:1032) at org.apache.hadoop.hbase.client.ConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionImplementation.java:1752) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1232) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) ... 1 more 2018-12-04 20:57:15,463 DEBUG [Time-limited test] client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=7, started=10354 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?, details=, exception=org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1175) at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1234) at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1223) at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:912) at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:906) at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1701) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDown(RestoreSnapshotFromClientTestBase.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hbase.CallQueueTooBigException: Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ? 
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:362)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:350)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionImplementation.java:1125)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1153)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1169)
    ... 42 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on asf910.gq1.ygridcore.net,53736,1543956537196, too many items queued ?
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
2018-12-04 20:57:15,516 INFO [Time-limited test] hbase.ResourceChecker(172): after: client.TestRestoreSnapshotFromClientAfterSplittingRegions#testRestoreSnapshotAfterSplittingRegions[1: regionReplication=3] Thread=430 (was 427) - Thread LEAK? -, OpenFileDescriptor=1541 (was 1538) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=1516 (was 1534), ProcessCount=303 (was 302) - ProcessCount LEAK? -, AvailableMemoryMB=10638 (was 11402)
2018-12-04 20:57:15,516 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=1541 is superior to 1024
2018-12-04 20:57:15,517 INFO [Time-limited test] hbase.HBaseTestingUtility(1104): Shutting down minicluster
2018-12-04 20:57:15,517 INFO [Time-limited test] client.ConnectionImplementation(1775): Closing master protocol: MasterService
2018-12-04 20:57:15,517 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x48feb89f to localhost:64381
2018-12-04 20:57:15,518 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:57:15,518 DEBUG [Time-limited test] util.JVMClusterUtil(247): Shutting down HBase Cluster
2018-12-04 20:57:15,520 DEBUG [Time-limited test] util.JVMClusterUtil(267): Found active master hash=583695429, stopped=false
2018-12-04 20:57:15,520 INFO [Time-limited test] master.ServerManager(901): Cluster shutdown requested of master=asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:57:15,550 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2018-12-04 20:57:15,550 INFO [Time-limited test] procedure2.ProcedureExecutor(683): Stopping
2018-12-04 20:57:15,550 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2018-12-04 20:57:15,550 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2018-12-04 20:57:15,550 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2018-12-04 20:57:15,551 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:57:15,550 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x3366ddb3 to localhost:64381
2018-12-04 20:57:15,552 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:57:15,552 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:57:15,552 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2018-12-04 20:57:15,552 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:57:15,553 INFO [RS:0;asf910:34504] regionserver.HRegionServer(988): Closing user regions
2018-12-04 20:57:15,553 INFO [Time-limited test] regionserver.HRegionServer(2138): ***** STOPPING region server 'asf910.gq1.ygridcore.net,34504,1543956539068' *****
2018-12-04 20:57:15,553 INFO [Time-limited test] regionserver.HRegionServer(2152): STOPPED: Shutdown requested
2018-12-04 20:57:15,553 INFO [Time-limited test] regionserver.HRegionServer(2138): ***** STOPPING region server 'asf910.gq1.ygridcore.net,51486,1543956539203' *****
2018-12-04 20:57:15,553 INFO [Time-limited test] regionserver.HRegionServer(2152): STOPPED: Shutdown requested
2018-12-04 20:57:15,554 INFO [Time-limited test] regionserver.HRegionServer(2138): ***** STOPPING region server 'asf910.gq1.ygridcore.net,36011,1543956539302' *****
2018-12-04 20:57:15,556 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1541): Closing 3694f6258e9e47dea826bcb208d58324, disabling compactions & flushes
2018-12-04 20:57:15,555 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1541): Closing f54fb87a834cb50fd2027cf50bec8dde, disabling compactions & flushes
2018-12-04 20:57:15,555 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 9ec9c1da4947b53085aaed5a2a3da06b, disabling compactions & flushes
2018-12-04 20:57:15,556 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324.
2018-12-04 20:57:15,556 INFO [Time-limited test] regionserver.HRegionServer(2152): STOPPED: Shutdown requested
2018-12-04 20:57:15,557 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.
2018-12-04 20:57:15,557 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=22.50 KB heapSize=48.58 KB
2018-12-04 20:57:15,557 INFO [RS:2;asf910:36011] regionserver.SplitLogWorker(166): Sending interrupt to stop the worker thread
2018-12-04 20:57:15,557 INFO [RS:0;asf910:34504] regionserver.SplitLogWorker(166): Sending interrupt to stop the worker thread
2018-12-04 20:57:15,557 INFO [RS:1;asf910:51486] regionserver.SplitLogWorker(166): Sending interrupt to stop the worker thread
2018-12-04 20:57:15,557 INFO [SplitLogWorker-asf910:36011] regionserver.SplitLogWorker(148): SplitLogWorker interrupted. Exiting.
2018-12-04 20:57:15,557 INFO [SplitLogWorker-asf910:34504] regionserver.SplitLogWorker(148): SplitLogWorker interrupted. Exiting.
2018-12-04 20:57:15,558 INFO [SplitLogWorker-asf910:34504] regionserver.SplitLogWorker(157): SplitLogWorker asf910.gq1.ygridcore.net,34504,1543956539068 exiting
2018-12-04 20:57:15,557 INFO [SplitLogWorker-asf910:36011] regionserver.SplitLogWorker(157): SplitLogWorker asf910.gq1.ygridcore.net,36011,1543956539302 exiting
2018-12-04 20:57:15,558 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde.
2018-12-04 20:57:15,558 INFO [SplitLogWorker-asf910:51486] regionserver.SplitLogWorker(148): SplitLogWorker interrupted. Exiting.
2018-12-04 20:57:15,558 INFO [RS:0;asf910:34504] regionserver.HeapMemoryManager(221): Stopping
2018-12-04 20:57:15,558 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=1.44 KB heapSize=3.30 KB
2018-12-04 20:57:15,558 INFO [RS:2;asf910:36011] regionserver.HeapMemoryManager(221): Stopping
2018-12-04 20:57:15,559 INFO [RS:0;asf910:34504] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-12-04 20:57:15,559 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(382): MemStoreFlusher.0 exiting
2018-12-04 20:57:15,559 INFO [RS:0;asf910:34504] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-12-04 20:57:15,559 INFO [RS:0;asf910:34504] regionserver.HRegionServer(1079): stopping server asf910.gq1.ygridcore.net,34504,1543956539068
2018-12-04 20:57:15,558 INFO [RS:1;asf910:51486] regionserver.HeapMemoryManager(221): Stopping
2018-12-04 20:57:15,558 INFO [SplitLogWorker-asf910:51486] regionserver.SplitLogWorker(157): SplitLogWorker asf910.gq1.ygridcore.net,51486,1543956539203 exiting
2018-12-04 20:57:15,559 INFO [RS:1;asf910:51486] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-12-04 20:57:15,561 INFO [RS:1;asf910:51486] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-12-04 20:57:15,562 INFO [RS:1;asf910:51486] regionserver.HRegionServer(1079): stopping server asf910.gq1.ygridcore.net,51486,1543956539203
2018-12-04 20:57:15,562 DEBUG [RS:1;asf910:51486] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-12-04 20:57:15,559 DEBUG [RS:0;asf910:34504] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-12-04 20:57:15,562 INFO [RS:0;asf910:34504] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x30c17016 to localhost:64381
2018-12-04 20:57:15,562 DEBUG [RS:0;asf910:34504] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:57:15,559 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(382): MemStoreFlusher.0 exiting
2018-12-04 20:57:15,559 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(382): MemStoreFlusher.1 exiting
2018-12-04 20:57:15,559 INFO [RS:2;asf910:36011] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully.
2018-12-04 20:57:15,564 INFO [RS:2;asf910:36011] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2018-12-04 20:57:15,564 INFO [RS:2;asf910:36011] regionserver.HRegionServer(1079): stopping server asf910.gq1.ygridcore.net,36011,1543956539302
2018-12-04 20:57:15,564 DEBUG [RS:2;asf910:36011] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-12-04 20:57:15,564 INFO [RS:1;asf910:51486] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x0a5c979d to localhost:64381
2018-12-04 20:57:15,566 DEBUG [RS:1;asf910:51486] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:57:15,563 INFO [RS:0;asf910:34504] regionserver.HRegionServer(1384): Waiting on 3 regions to close
2018-12-04 20:57:15,571 DEBUG [RS:0;asf910:34504] regionserver.HRegionServer(1388): Online Regions={f54fb87a834cb50fd2027cf50bec8dde=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde., 3694f6258e9e47dea826bcb208d58324=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324., 9ec9c1da4947b53085aaed5a2a3da06b=hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.}
2018-12-04 20:57:15,561 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(382): MemStoreFlusher.1 exiting
2018-12-04 20:57:15,561 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(382): MemStoreFlusher.0 exiting
2018-12-04 20:57:15,561 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(382): MemStoreFlusher.1 exiting
2018-12-04 20:57:15,568 INFO [RS:1;asf910:51486] regionserver.CompactSplit(394): Waiting for Split Thread to finish...
2018-12-04 20:57:15,571 INFO [RS:1;asf910:51486] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish...
2018-12-04 20:57:15,566 INFO [RS:2;asf910:36011] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x1dadac8c to localhost:64381
2018-12-04 20:57:15,566 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing eea7db479f05d0bfd00980b44810efbb, disabling compactions & flushes
2018-12-04 20:57:15,578 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.
2018-12-04 20:57:15,566 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1541): Closing 0cbbdc66f0b53e014d4b09cb9f965d90, disabling compactions & flushes
2018-12-04 20:57:15,564 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1541): Closing 5abac36fc00b7260425322877c1d024f, disabling compactions & flushes
2018-12-04 20:57:15,564 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 17bf706db6019b3980612acaaf29410d, disabling compactions & flushes
2018-12-04 20:57:15,581 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:57:15,580 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:57:15,581 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=2.29 KB heapSize=5.13 KB
2018-12-04 20:57:15,580 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=1.90 KB heapSize=4.28 KB
2018-12-04 20:57:15,578 DEBUG [RS:2;asf910:36011] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:57:15,571 INFO [RS:1;asf910:51486] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish...
2018-12-04 20:57:15,582 INFO [RS:2;asf910:36011] regionserver.HRegionServer(1384): Waiting on 2 regions to close
2018-12-04 20:57:15,583 DEBUG [RS:2;asf910:36011] regionserver.HRegionServer(1388): Online Regions={0cbbdc66f0b53e014d4b09cb9f965d90=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90., eea7db479f05d0bfd00980b44810efbb=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb.}
2018-12-04 20:57:15,581 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d.
2018-12-04 20:57:15,583 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2617): Flushing 1/1 column families, dataSize=2.42 KB heapSize=5.41 KB
2018-12-04 20:57:15,583 INFO [RS:1;asf910:51486] regionserver.HRegionServer(1384): Waiting on 3 regions to close
2018-12-04 20:57:15,583 DEBUG [RS:1;asf910:51486] regionserver.HRegionServer(1388): Online Regions={17bf706db6019b3980612acaaf29410d=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d., 1588230740=hbase:meta,,1.1588230740, 5abac36fc00b7260425322877c1d024f=testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.}
2018-12-04 20:57:15,587 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegion(1541): Closing 1588230740, disabling compactions & flushes
2018-12-04 20:57:15,591 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegion(1581): Updates disabled for region hbase:meta,,1.1588230740
2018-12-04 20:57:15,591 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegion(2617): Flushing 3/3 column families, dataSize=45.84 KB heapSize=64.01 KB
2018-12-04 20:57:15,600 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/namespace/9ec9c1da4947b53085aaed5a2a3da06b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2018-12-04 20:57:15,605 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.
2018-12-04 20:57:15,605 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed hbase:namespace,,1543956544546.9ec9c1da4947b53085aaed5a2a3da06b.
2018-12-04 20:57:15,634 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/5abac36fc00b7260425322877c1d024f/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=11
2018-12-04 20:57:15,651 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:57:15,651 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,,1543956551676.5abac36fc00b7260425322877c1d024f.
2018-12-04 20:57:15,658 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741867_1043{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:57:15,659 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741868_1044{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 28816
2018-12-04 20:57:15,659 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741868_1044 size 28816
2018-12-04 20:57:15,659 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741868_1044 size 28816
2018-12-04 20:57:15,661 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741867_1043 size 6438
2018-12-04 20:57:15,661 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=1.44 KB at sequenceid=15 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/.tmp/cf/32fb33359c2b4a1c9e3ac59b43393fe3
2018-12-04 20:57:15,669 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741867_1043 size 6438
2018-12-04 20:57:15,674 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741872_1048{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|FINALIZED]]} size 0
2018-12-04 20:57:15,675 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741869_1045{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:57:15,678 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741871_1047{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:57:15,682 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741869_1045{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:57:15,683 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741872_1048 size 27687
2018-12-04 20:57:15,683 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741872_1048 size 27687
2018-12-04 20:57:15,683 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741871_1047{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:57:15,684 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741871_1047{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:57:15,685 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.29 KB at sequenceid=15 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/.tmp/cf/43fd6b23549243c8905b2df2845d1996
2018-12-04 20:57:15,693 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=44.67 KB at sequenceid=80 (bloomFilter=false), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/.tmp/info/1ff1757397d2428480ffe89e58b450eb
2018-12-04 20:57:15,694 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741869_1045 size 7490
2018-12-04 20:57:15,695 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.42 KB at sequenceid=15 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/.tmp/cf/c74a30a1325a4806a91340ae87eeae0c
2018-12-04 20:57:15,697 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/.tmp/cf/43fd6b23549243c8905b2df2845d1996 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/43fd6b23549243c8905b2df2845d1996
2018-12-04 20:57:15,699 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741870_1046{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:57:15,703 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741870_1046{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0
2018-12-04 20:57:15,706 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741870_1046 size 6946
2018-12-04 20:57:15,709 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=1.90 KB at sequenceid=15 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/.tmp/cf/073fff3eb61d4afd818fc81884462b09
2018-12-04 20:57:15,709 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/.tmp/cf/32fb33359c2b4a1c9e3ac59b43393fe3 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/32fb33359c2b4a1c9e3ac59b43393fe3
2018-12-04 20:57:15,717 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/cf/43fd6b23549243c8905b2df2845d1996, entries=35, sequenceid=15, filesize=7.2 K
2018-12-04 20:57:15,721 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/.tmp/cf/c74a30a1325a4806a91340ae87eeae0c as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/c74a30a1325a4806a91340ae87eeae0c
2018-12-04 20:57:15,724 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(2816): Finished flush of dataSize ~2.29 KB/2343, heapSize ~5.13 KB/5256, currentSize=0 B/0 for 0cbbdc66f0b53e014d4b09cb9f965d90 in 142ms, sequenceid=15, compaction requested=false
2018-12-04 20:57:15,729 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/.tmp/cf/073fff3eb61d4afd818fc81884462b09 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/073fff3eb61d4afd818fc81884462b09
2018-12-04 20:57:15,740 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/0cbbdc66f0b53e014d4b09cb9f965d90/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=11
2018-12-04 20:57:15,741 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/cf/c74a30a1325a4806a91340ae87eeae0c, entries=37, sequenceid=15, filesize=7.3 K
2018-12-04 20:57:15,742 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741873_1049{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:57:15,744 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:57:15,745 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,4,1543956551676.0cbbdc66f0b53e014d4b09cb9f965d90.
2018-12-04 20:57:15,748 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741873_1049{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0
2018-12-04 20:57:15,763 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2816): Finished flush of dataSize ~2.42 KB/2477, heapSize ~5.41 KB/5544, currentSize=0 B/0 for 17bf706db6019b3980612acaaf29410d in 179ms, sequenceid=15, compaction requested=false
2018-12-04 20:57:15,769 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=1.17 KB at sequenceid=80 (bloomFilter=false),
to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/.tmp/table/693f4beefd0341338402ef119d29779c 2018-12-04 20:57:15,770 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741873_1049 size 5146 2018-12-04 20:57:15,770 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/cf/32fb33359c2b4a1c9e3ac59b43393fe3, entries=22, sequenceid=15, filesize=6.3 K 2018-12-04 20:57:15,773 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/cf/073fff3eb61d4afd818fc81884462b09, entries=29, sequenceid=15, filesize=6.8 K 2018-12-04 20:57:15,775 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(2816): Finished flush of dataSize ~1.44 KB/1472, heapSize ~3.30 KB/3384, currentSize=0 B/0 for f54fb87a834cb50fd2027cf50bec8dde in 217ms, sequenceid=15, compaction requested=false 2018-12-04 20:57:15,778 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(2816): Finished flush of dataSize ~1.90 KB/1941, heapSize ~4.29 KB/4392, currentSize=0 B/0 for eea7db479f05d0bfd00980b44810efbb in 200ms, sequenceid=15, compaction requested=false 2018-12-04 20:57:15,783 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote 
file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/17bf706db6019b3980612acaaf29410d/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=11 2018-12-04 20:57:15,786 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:57:15,786 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,1,1543956551676.17bf706db6019b3980612acaaf29410d. 2018-12-04 20:57:15,791 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/f54fb87a834cb50fd2027cf50bec8dde/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=11 2018-12-04 20:57:15,795 INFO [RS_CLOSE_REGION-regionserver/asf910:0-1] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 2018-12-04 20:57:15,795 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-1] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,3,1543956551676.f54fb87a834cb50fd2027cf50bec8dde. 
2018-12-04 20:57:15,799 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/.tmp/info/1ff1757397d2428480ffe89e58b450eb as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/info/1ff1757397d2428480ffe89e58b450eb 2018-12-04 20:57:15,815 INFO [regionserver/asf910:0.Chore.1] hbase.ScheduledChore(180): Chore: CompactionChecker was stopped 2018-12-04 20:57:15,815 INFO [regionserver/asf910:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-12-04 20:57:15,816 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/eea7db479f05d0bfd00980b44810efbb/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=11 2018-12-04 20:57:15,817 INFO [regionserver/asf910:0.Chore.1] hbase.ScheduledChore(180): Chore: CompactionChecker was stopped 2018-12-04 20:57:15,817 INFO [regionserver/asf910:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-12-04 20:57:15,818 INFO [RS_CLOSE_REGION-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 2018-12-04 20:57:15,819 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,2,1543956551676.eea7db479f05d0bfd00980b44810efbb. 
2018-12-04 20:57:15,846 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/info/1ff1757397d2428480ffe89e58b450eb, entries=118, sequenceid=80, filesize=27.0 K 2018-12-04 20:57:15,854 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/.tmp/table/693f4beefd0341338402ef119d29779c as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/table/693f4beefd0341338402ef119d29779c 2018-12-04 20:57:15,877 INFO [regionserver/asf910:0.Chore.1] hbase.ScheduledChore(180): Chore: CompactionChecker was stopped 2018-12-04 20:57:15,877 INFO [regionserver/asf910:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-12-04 20:57:15,912 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/hbase/meta/1588230740/table/693f4beefd0341338402ef119d29779c, entries=5, sequenceid=80, filesize=5.0 K 2018-12-04 20:57:15,922 INFO [regionserver/asf910:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-12-04 20:57:15,922 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegion(2816): Finished flush of dataSize ~45.84 KB/46940, heapSize ~63.82 KB/65352, currentSize=0 B/0 for 1588230740 in 331ms, sequenceid=80, compaction requested=false 2018-12-04 20:57:15,929 INFO [regionserver/asf910:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-12-04 20:57:15,939 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/hbase/meta/1588230740/recovered.edits/83.seqid, newMaxSeqId=83, maxSeqId=1 
2018-12-04 20:57:15,941 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2018-12-04 20:57:15,944 INFO [RS_CLOSE_META-regionserver/asf910:0-0] regionserver.HRegion(1698): Closed hbase:meta,,1.1588230740 2018-12-04 20:57:15,944 DEBUG [RS_CLOSE_META-regionserver/asf910:0-0] handler.CloseRegionHandler(124): Closed hbase:meta,,1.1588230740 2018-12-04 20:57:15,949 INFO [regionserver/asf910:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-12-04 20:57:15,984 INFO [RS:2;asf910:36011] regionserver.HRegionServer(1107): stopping server asf910.gq1.ygridcore.net,36011,1543956539302; all regions closed. 2018-12-04 20:57:15,989 INFO [RS:1;asf910:51486] regionserver.HRegionServer(1107): stopping server asf910.gq1.ygridcore.net,51486,1543956539203; all regions closed. 2018-12-04 20:57:15,994 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:15,994 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:15,995 INFO [Block report processor] 
blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,001 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,001 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,002 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-e5e4b851-a625-4939-b76b-08e33db5384e:NORMAL:127.0.0.1:54375|RBW], 
ReplicaUC[[DISK]DS-f75f33d5-a111-46c7-9ccb-b1e5e0d32c7d:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,013 DEBUG [RS:1;asf910:51486] wal.AbstractFSWAL(847): Moved 1 WAL file(s) to /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs 2018-12-04 20:57:16,013 DEBUG [RS:2;asf910:36011] wal.AbstractFSWAL(847): Moved 1 WAL file(s) to /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs 2018-12-04 20:57:16,013 INFO [RS:1;asf910:51486] wal.AbstractFSWAL(850): Closed WAL: AsyncFSWAL asf910.gq1.ygridcore.net%2C51486%2C1543956539203.meta:.meta(num 1543956542592) 2018-12-04 20:57:16,014 INFO [RS:2;asf910:36011] wal.AbstractFSWAL(850): Closed WAL: AsyncFSWAL asf910.gq1.ygridcore.net%2C36011%2C1543956539302:(num 1543956542083) 2018-12-04 20:57:16,014 DEBUG [RS:2;asf910:36011] ipc.AbstractRpcClient(483): Stopping rpc client 2018-12-04 20:57:16,014 INFO [RS:2;asf910:36011] regionserver.Leases(149): Closed leases 2018-12-04 20:57:16,015 INFO [RS:2;asf910:36011] hbase.ChoreService(327): Chore service for: regionserver/asf910:0 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown 2018-12-04 20:57:16,015 INFO [RS:2;asf910:36011] regionserver.CompactSplit(394): Waiting for Split Thread to finish... 2018-12-04 20:57:16,015 INFO [RS:2;asf910:36011] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish... 2018-12-04 20:57:16,015 INFO [regionserver/asf910:0.logRoller] regionserver.LogRoller(212): LogRoller exiting. 2018-12-04 20:57:16,015 INFO [RS:2;asf910:36011] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish... 
2018-12-04 20:57:16,023 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0 2018-12-04 20:57:16,024 INFO [RS:2;asf910:36011] ipc.NettyRpcServer(144): Stopping server on /67.195.81.154:36011 2018-12-04 20:57:16,024 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0 2018-12-04 20:57:16,024 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-13eb77f1-f887-4435-855a-29c30e684eaa:NORMAL:127.0.0.1:60454|RBW]]} size 0 2018-12-04 20:57:16,032 DEBUG [RS:1;asf910:51486] wal.AbstractFSWAL(847): Moved 1 WAL file(s) to /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs 2018-12-04 20:57:16,032 INFO [RS:1;asf910:51486] wal.AbstractFSWAL(850): Closed WAL: AsyncFSWAL asf910.gq1.ygridcore.net%2C51486%2C1543956539203:(num 
1543956542083) 2018-12-04 20:57:16,032 DEBUG [RS:1;asf910:51486] ipc.AbstractRpcClient(483): Stopping rpc client 2018-12-04 20:57:16,032 INFO [RS:1;asf910:51486] regionserver.Leases(149): Closed leases 2018-12-04 20:57:16,036 INFO [RS:1;asf910:51486] hbase.ChoreService(327): Chore service for: regionserver/asf910:0 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown 2018-12-04 20:57:16,037 INFO [regionserver/asf910:0.logRoller] regionserver.LogRoller(212): LogRoller exiting. 2018-12-04 20:57:16,044 INFO [RS:1;asf910:51486] ipc.NettyRpcServer(144): Stopping server on /67.195.81.154:51486 2018-12-04 20:57:16,060 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:57:16,060 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2018-12-04 20:57:16,060 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:57:16,060 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2018-12-04 20:57:16,060 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2018-12-04 20:57:16,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,36011,1543956539302 2018-12-04 20:57:16,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2018-12-04 20:57:16,063 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=22.50 KB at sequenceid=15 (bloomFilter=true), to=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/.tmp/cf/7bb730c5db91436f96cc33adce26fce3 2018-12-04 20:57:16,076 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/.tmp/cf/7bb730c5db91436f96cc33adce26fce3 as hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7bb730c5db91436f96cc33adce26fce3 2018-12-04 20:57:16,092 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:36011-0x1677afb1afa0003, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:57:16,092 DEBUG 
[Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:51486-0x1677afb1afa0002, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:57:16,092 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,51486,1543956539203 2018-12-04 20:57:16,093 INFO [RegionServerTracker-0] master.RegionServerTracker(171): RegionServer ephemeral node deleted, processing expiration [asf910.gq1.ygridcore.net,51486,1543956539203] 2018-12-04 20:57:16,093 DEBUG [RegionServerTracker-0] master.DeadServer(143): Added asf910.gq1.ygridcore.net,51486,1543956539203; numProcessing=1 2018-12-04 20:57:16,093 INFO [RegionServerTracker-0] master.ServerManager(579): Cluster shutdown set; asf910.gq1.ygridcore.net,51486,1543956539203 expired; onlineServers=2 2018-12-04 20:57:16,093 INFO [RegionServerTracker-0] master.RegionServerTracker(171): RegionServer ephemeral node deleted, processing expiration [asf910.gq1.ygridcore.net,36011,1543956539302] 2018-12-04 20:57:16,093 DEBUG [RegionServerTracker-0] master.DeadServer(143): Added asf910.gq1.ygridcore.net,36011,1543956539302; numProcessing=2 2018-12-04 20:57:16,093 INFO [RegionServerTracker-0] master.ServerManager(579): Cluster shutdown set; asf910.gq1.ygridcore.net,36011,1543956539302 expired; onlineServers=1 2018-12-04 20:57:16,112 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HStore(1074): Added hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/data/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/cf/7bb730c5db91436f96cc33adce26fce3, entries=344, sequenceid=15, filesize=28.1 K 2018-12-04 20:57:16,115 INFO 
[RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(2816): Finished flush of dataSize ~22.50 KB/23044, heapSize ~48.59 KB/49752, currentSize=0 B/0 for 3694f6258e9e47dea826bcb208d58324 in 558ms, sequenceid=15, compaction requested=false 2018-12-04 20:57:16,119 INFO [RS:2;asf910:36011] regionserver.HRegionServer(1154): Exiting; stopping=asf910.gq1.ygridcore.net,36011,1543956539302; zookeeper connection closed. 2018-12-04 20:57:16,120 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4f3fd703] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4f3fd703 2018-12-04 20:57:16,124 INFO [RS:1;asf910:51486] regionserver.HRegionServer(1154): Exiting; stopping=asf910.gq1.ygridcore.net,51486,1543956539203; zookeeper connection closed. 2018-12-04 20:57:16,125 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@551acdf8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@551acdf8 2018-12-04 20:57:16,126 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] wal.WALSplitter(695): Wrote file=hdfs://localhost:45471/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/default/testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635/3694f6258e9e47dea826bcb208d58324/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=11 2018-12-04 20:57:16,174 INFO [RS_CLOSE_REGION-regionserver/asf910:0-2] regionserver.HRegion(1698): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 2018-12-04 20:57:16,174 DEBUG [RS_CLOSE_REGION-regionserver/asf910:0-2] handler.CloseRegionHandler(124): Closed testRestoreSnapshotAfterSplittingRegions_0__regionReplication_1_-1543956551635,5,1543956551676.3694f6258e9e47dea826bcb208d58324. 
2018-12-04 20:57:16,372 INFO [RS:0;asf910:34504] regionserver.HRegionServer(1107): stopping server asf910.gq1.ygridcore.net,34504,1543956539068; all regions closed. 2018-12-04 20:57:16,378 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,379 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54375 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,379 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:60454 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f235008-470b-44c0-8f58-8abc282f11fb:NORMAL:127.0.0.1:60454|RBW], ReplicaUC[[DISK]DS-1db60017-9ad1-4de0-aa53-b88332f13b9e:NORMAL:127.0.0.1:54375|RBW], ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 0 2018-12-04 20:57:16,383 DEBUG [RS:0;asf910:34504] wal.AbstractFSWAL(847): Moved 1 WAL file(s) to /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/oldWALs 2018-12-04 20:57:16,383 INFO [RS:0;asf910:34504] wal.AbstractFSWAL(850): Closed WAL: AsyncFSWAL 
asf910.gq1.ygridcore.net%2C34504%2C1543956539068:(num 1543956542083) 2018-12-04 20:57:16,383 DEBUG [RS:0;asf910:34504] ipc.AbstractRpcClient(483): Stopping rpc client 2018-12-04 20:57:16,383 INFO [RS:0;asf910:34504] regionserver.Leases(149): Closed leases 2018-12-04 20:57:16,383 INFO [RS:0;asf910:34504] hbase.ChoreService(327): Chore service for: regionserver/asf910:0 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown 2018-12-04 20:57:16,384 INFO [RS:0;asf910:34504] regionserver.CompactSplit(394): Waiting for Split Thread to finish... 2018-12-04 20:57:16,384 INFO [regionserver/asf910:0.logRoller] regionserver.LogRoller(212): LogRoller exiting. 2018-12-04 20:57:16,384 INFO [RS:0;asf910:34504] regionserver.CompactSplit(394): Waiting for Large Compaction Thread to finish... 2018-12-04 20:57:16,384 INFO [RS:0;asf910:34504] regionserver.CompactSplit(394): Waiting for Small Compaction Thread to finish... 2018-12-04 20:57:16,386 INFO [RS:0;asf910:34504] ipc.NettyRpcServer(144): Stopping server on /67.195.81.154:34504 2018-12-04 20:57:16,425 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2018-12-04 20:57:16,425 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:34504-0x1677afb1afa0001, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,34504,1543956539068 2018-12-04 20:57:16,458 INFO [RS:0;asf910:34504] regionserver.HRegionServer(1154): Exiting; stopping=asf910.gq1.ygridcore.net,34504,1543956539068; zookeeper connection closed. 
2018-12-04 20:57:16,458 INFO [RegionServerTracker-0] master.RegionServerTracker(171): RegionServer ephemeral node deleted, processing expiration [asf910.gq1.ygridcore.net,34504,1543956539068] 2018-12-04 20:57:16,458 DEBUG [RegionServerTracker-0] master.DeadServer(143): Added asf910.gq1.ygridcore.net,34504,1543956539068; numProcessing=3 2018-12-04 20:57:16,458 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@691393a6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(221): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@691393a6 2018-12-04 20:57:16,458 INFO [RegionServerTracker-0] master.ServerManager(579): Cluster shutdown set; asf910.gq1.ygridcore.net,34504,1543956539068 expired; onlineServers=0 2018-12-04 20:57:16,459 INFO [RegionServerTracker-0] regionserver.HRegionServer(2138): ***** STOPPING region server 'asf910.gq1.ygridcore.net,53736,1543956537196' ***** 2018-12-04 20:57:16,459 INFO [RegionServerTracker-0] regionserver.HRegionServer(2152): STOPPED: Cluster shutdown set; onlineServer=0 2018-12-04 20:57:16,459 INFO [Time-limited test] util.JVMClusterUtil(345): Shutdown of 1 master(s) and 3 regionserver(s) complete 2018-12-04 20:57:16,461 DEBUG [M:0;asf910:53736] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6aa4f3a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=asf910.gq1.ygridcore.net/67.195.81.154:0 2018-12-04 20:57:16,461 DEBUG [M:0;asf910:53736] regionserver.HRegionServer(947): About to register with Master. 
2018-12-04 20:57:16,461 INFO [M:0;asf910:53736] regionserver.HRegionServer(1079): stopping server asf910.gq1.ygridcore.net,53736,1543956537196
2018-12-04 20:57:16,461 DEBUG [M:0;asf910:53736] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-12-04 20:57:16,462 INFO [M:0;asf910:53736] regionserver.HRegionServer(1107): stopping server asf910.gq1.ygridcore.net,53736,1543956537196; all regions closed.
2018-12-04 20:57:16,462 DEBUG [M:0;asf910:53736] ipc.AbstractRpcClient(483): Stopping rpc client
2018-12-04 20:57:16,462 INFO [M:0;asf910:53736] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2018-12-04 20:57:16,462 INFO [M:0;asf910:53736] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
2018-12-04 20:57:16,462 INFO [M:0;asf910:53736] hbase.ChoreService(327): Chore service for: master/asf910:0 had [] on shutdown
2018-12-04 20:57:16,463 DEBUG [M:0;asf910:53736] master.HMaster(1397): Stopping service threads
2018-12-04 20:57:16,491 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2018-12-04 20:57:16,491 DEBUG [M:0;asf910:53736] zookeeper.ZKUtil(614): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2018-12-04 20:57:16,491 WARN [M:0;asf910:53736] master.ActiveMasterManager(271): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2018-12-04 20:57:16,491 INFO [M:0;asf910:53736] assignment.AssignmentManager(229): Stopping assignment manager
2018-12-04 20:57:16,492 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:53736-0x1677afb1afa0000, quorum=localhost:64381, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2018-12-04 20:57:16,493 INFO [M:0;asf910:53736] procedure2.RemoteProcedureDispatcher(116): Stopping procedure remote dispatcher
2018-12-04 20:57:17,232 INFO [master/asf910:0.splitLogManager..Chore.1] hbase.ScheduledChore(180): Chore: SplitLogManager Timeout Monitor was stopped
2018-12-04 20:57:17,352 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(157): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2018-12-04 20:57:18,745 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2.2510sec; sending interrupt
2018-12-04 20:57:20,746 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4.2530sec; sending interrupt
2018-12-04 20:57:21,109 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2018-12-04 20:57:22,748 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 6.2540sec; sending interrupt
2018-12-04 20:57:24,751 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 8.2580sec; sending interrupt
2018-12-04 20:57:26,752 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 10.2590sec; sending interrupt
2018-12-04 20:57:28,753 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 12.2600sec; sending interrupt
2018-12-04 20:57:30,762 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 14.2690sec; sending interrupt
2018-12-04 20:57:32,763 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 16.2700sec; sending interrupt
2018-12-04 20:57:34,764 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 18.2710sec; sending interrupt
2018-12-04 20:57:36,766 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 20.2730sec; sending interrupt
2018-12-04 20:57:38,767 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 22.2740sec; sending interrupt
2018-12-04 20:57:40,769 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 24.2760sec; sending interrupt
2018-12-04 20:57:42,770 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 26.2770sec; sending interrupt
2018-12-04 20:57:44,773 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 28.2800sec; sending interrupt
2018-12-04 20:57:46,778 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 30.2850sec; sending interrupt
2018-12-04 20:57:48,779 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 32.2860sec; sending interrupt
2018-12-04 20:57:50,780 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 34.2870sec; sending interrupt
2018-12-04 20:57:52,783 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 36.2900sec; sending interrupt
2018-12-04 20:57:54,784 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 38.2910sec; sending interrupt
2018-12-04 20:57:56,790 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 40.2970sec; sending interrupt
2018-12-04 20:57:58,791 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 42.2980sec; sending interrupt
2018-12-04 20:58:00,793 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 44.2990sec; sending interrupt
2018-12-04 20:58:02,794 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 46.3010sec; sending interrupt
2018-12-04 20:58:04,796 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 48.3030sec; sending interrupt
2018-12-04 20:58:06,798 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 50.3050sec; sending interrupt
2018-12-04 20:58:08,800 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 52.3060sec; sending interrupt
2018-12-04 20:58:10,801 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 54.3080sec; sending interrupt
2018-12-04 20:58:12,802 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 56.3090sec; sending interrupt
2018-12-04 20:58:14,803 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 58.3100sec; sending interrupt
Process Thread Dump: Automatic Stack Trace every 60 seconds waiting on M:0;asf910:53736
240 active threads
Thread 1500 (Timer for 'HBase' metrics system):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 6
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 1402 (process reaper):
  State: TIMED_WAITING
  Blocked count: 2
  Waited count: 60
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 1353 (RS-EventLoopGroup-4-12):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1342 (RS-EventLoopGroup-4-11):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1320 (RS-EventLoopGroup-4-10):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1308 (RS-EventLoopGroup-4-9):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1300 (IPC Parameter Sending Thread #3):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 647
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 1285 (RS-EventLoopGroup-4-8):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1272 (RS-EventLoopGroup-4-7):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1120 (RS-EventLoopGroup-4-6):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1011 (RS-EventLoopGroup-3-4):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1010 (Default-IPC-NioEventLoopGroup-7-4):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1009 (Default-IPC-NioEventLoopGroup-7-3):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 882 (RS-EventLoopGroup-4-5):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 745 (RS-EventLoopGroup-1-5):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 744 (Default-IPC-NioEventLoopGroup-7-2):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 743 (RS-EventLoopGroup-4-4):
  State: RUNNABLE
  Blocked count: 6
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 742 (Default-IPC-NioEventLoopGroup-7-1):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 737 (region-location-1):
  State: WAITING
  Blocked count: 3
  Waited count: 7
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 736 (region-location-0):
  State: WAITING
  Blocked count: 1
  Waited count: 3
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 732 (RS-EventLoopGroup-3-3):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 731 (RS-EventLoopGroup-5-32):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 725 (RS-EventLoopGroup-3-2):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 724 (RS-EventLoopGroup-5-31):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 701 (RS-EventLoopGroup-4-3):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 700 (RS-EventLoopGroup-5-30):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 695 (RS-EventLoopGroup-5-28):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 694 (RS-EventLoopGroup-5-29):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 691 (RS-EventLoopGroup-5-27):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 687 (RS-EventLoopGroup-5-26):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 686 (RS-EventLoopGroup-5-25):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 684 (RS-EventLoopGroup-5-24):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 682 (RS-EventLoopGroup-4-2):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 681 (RS-EventLoopGroup-5-23):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 675 (RS-EventLoopGroup-5-22):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 672 (RS-EventLoopGroup-5-16):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 666 (RS-EventLoopGroup-5-14):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 674 (RS-EventLoopGroup-5-15):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 673 (RS-EventLoopGroup-5-17):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 671 (RS-EventLoopGroup-5-18):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 670 (RS-EventLoopGroup-5-20):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 669 (RS-EventLoopGroup-5-21):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 667 (RS-EventLoopGroup-5-19): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 650 (RS-EventLoopGroup-5-11): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 647 (RS-EventLoopGroup-5-13): State: RUNNABLE Blocked count: 9 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 646 (RS-EventLoopGroup-5-12): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 645 (RS-EventLoopGroup-5-10): State: RUNNABLE Blocked count: 3 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 
644 (RS-EventLoopGroup-5-9): State: RUNNABLE Blocked count: 3 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 643 (RS-EventLoopGroup-5-8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 642 (RS-EventLoopGroup-5-7): State: RUNNABLE Blocked count: 7 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 641 (RS-EventLoopGroup-5-5): State: RUNNABLE Blocked count: 1 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 640 (RS-EventLoopGroup-5-6): State: RUNNABLE Blocked count: 5 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 638 (LeaseRenewer:jenkins.hfs.0@localhost:45471): State: TIMED_WAITING Blocked count: 16 Waited count: 586 Stack: java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444) org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) java.lang.Thread.run(Thread.java:748) Thread 626 (RS:1;asf910:51486-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 2 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 624 (RS:2;asf910:36011-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 2 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 622 (RS:1;asf910:51486-MemStoreChunkPool Statistics): State: 
TIMED_WAITING Blocked count: 0 Waited count: 2 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 621 (RS:2;asf910:36011-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 2 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 602 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7a4b6fa6 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 604 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@35397d65 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 603 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@294176ac Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 581 (RegionServerTracker-0): State: WAITING Blocked count: 7 Waited count: 8 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@31bfbac5 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 580 (master/asf910:0:becomeActiveMaster-HFileCleaner.small.0-1543956541242): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38e47ecb Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:234) Thread 579 (master/asf910:0:becomeActiveMaster-HFileCleaner.large.0-1543956541242): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61dd31d9 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:219) Thread 578 (snapshot-hfile-cleaner-cache-refresher): State: TIMED_WAITING Blocked count: 5 Waited count: 13 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 576 (OldWALsCleaner-1): State: WAITING Blocked count: 0 Waited count: 1 Waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Thread 575 (OldWALsCleaner-0): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Thread 574 (master/asf910:0:becomeActiveMaster-EventThread): State: WAITING Blocked count: 0 Waited count: 2 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5df7a6e3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Thread 573 (master/asf910:0:becomeActiveMaster-SendThread(localhost:64381)): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Thread 527 (PEWorker-1): State: BLOCKED Blocked count: 10 Waited count: 89 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isTakingSnapshot(SnapshotManager.java:423) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.prepareSplitRegion(SplitTableRegionProcedure.java:470) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:244) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:97) org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189) org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039) Thread 572 (threadDeathWatcher-6-1): State: TIMED_WAITING Blocked count: 0 Waited count: 556 
Stack: java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 571 (RS-EventLoopGroup-1-4): State: RUNNABLE Blocked count: 47 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 570 (RS-EventLoopGroup-1-3): State: RUNNABLE Blocked count: 36 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 569 (RS-EventLoopGroup-1-2): State: RUNNABLE Blocked count: 29 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 523 (RpcClient-timer-pool1-t1): State: TIMED_WAITING Blocked count: 0 Waited count: 55522 Stack: java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459) java.lang.Thread.run(Thread.java:748) Thread 568 (RS-EventLoopGroup-5-3): State: RUNNABLE Blocked count: 34 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 567 (RS-EventLoopGroup-5-4): State: RUNNABLE Blocked count: 33 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 566 (RS-EventLoopGroup-5-2): State: RUNNABLE Blocked count: 39 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 564 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE): State: RUNNABLE Blocked count: 83 Waited count: 83 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    java.lang.Thread.run(Thread.java:748)
Thread 565 (ResponseProcessor for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847)
Thread 563 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE):
  State: RUNNABLE
  Blocked count: 33
  Waited count: 31
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    java.lang.Thread.run(Thread.java:748)
Thread 562 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[]):
  State: WAITING
  Blocked count: 169
  Waited count: 170
  Waiting on java.util.LinkedList@6be3c98f
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    java.lang.Thread.run(Thread.java:748)
Thread 561 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 560 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 559 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 5
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 544 (DataStreamer for file /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005):
  State: TIMED_WAITING
  Blocked count: 307
  Waited count: 326
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 525 (WALProcedureStoreSyncThread):
  State: TIMED_WAITING
  Blocked count: 307
  Waited count: 509
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:822)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:111)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:313)
Thread 524 (Idle-Rpc-Conn-Sweeper-pool2-t1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 37
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 519 (Thread-186):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 557
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 517 (master/asf910:0.splitLogManager..Chore.1):
  State: WAITING
  Blocked count: 0
  Waited count: 499
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4440678d
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 489 (org.apache.hadoop.hdfs.PeerCache@688f09e2):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 186
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:255)
    org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
    org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
    java.lang.Thread.run(Thread.java:748)
Thread 485 (Monitor thread for TaskMonitor):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 56
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    java.lang.Thread.run(Thread.java:748)
Thread 423 (M:0;asf910:53736):
  State: TIMED_WAITING
  Blocked count: 6
  Waited count: 5201
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1260)
    org.apache.hadoop.hbase.procedure2.StoppableThread.awaitTermination(StoppableThread.java:42)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.join(ProcedureExecutor.java:697)
    org.apache.hadoop.hbase.master.HMaster.stopProcedureExecutor(HMaster.java:1470)
    org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1413)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1133)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:595)
    java.lang.Thread.run(Thread.java:748)
Thread 466 (RS-EventLoopGroup-5-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 447 (RS-EventLoopGroup-4-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 425 (RS-EventLoopGroup-3-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 422 (RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@724e7839
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 421 (RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5ff22b89
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 420 (RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@69da2841
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 419 (RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 3
  Waiting on java.util.concurrent.Semaphore$NonfairSync@427563ce
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 418 (RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@43fb5409
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 417 (RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@d9b919c
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 416 (RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@35bb384e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 415 (RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5c394624
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 414 (RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@45fb1ab6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 413 (RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 70
  Waited count: 1713
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736):
  State: TIMED_WAITING
  Blocked count: 60
  Waited count: 359
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
    java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
    org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.tryAcquire(LockManager.java:162)
    org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.acquire(LockManager.java:123)
    org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.prepare(TakeSnapshotHandler.java:141)
    org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:60)
    org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:46)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotTable(SnapshotManager.java:524)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:510)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:633)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 411 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 50
  Waited count: 1102
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 410 (RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 40
  Waited count: 2534
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 409 (RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 12
  Waited count: 2935
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 408 (Time-limited test-EventThread):
  State: WAITING
  Blocked count: 15
  Waited count: 29
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@225b2ff1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
Thread 407 (Time-limited test-SendThread(localhost:64381)):
  State: RUNNABLE
  Blocked count: 9
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
Thread 404 (RS-EventLoopGroup-1-1):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 403 (HBase-Metrics2-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 341
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 394 (LeaseRenewer:jenkins@localhost:45471):
  State: TIMED_WAITING
  Blocked count: 17
  Waited count: 595
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
    org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    java.lang.Thread.run(Thread.java:748)
Thread 391 (ProcessThread(sid:0 cport:64381):):
  State: WAITING
  Blocked count: 0
  Waited count: 2072
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4eede27
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122)
Thread 390 (SyncThread:0):
  State: WAITING
  Blocked count: 3
  Waited count: 1987
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7530c34
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
Thread 389 (SessionTracker):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 281
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)
Thread 388 (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:64381):
  State: RUNNABLE
  Blocked count: 29
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173)
    java.lang.Thread.run(Thread.java:748)
Thread 387 (java.util.concurrent.ThreadPoolExecutor$Worker@ee3a971[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 382 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 381 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 376 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@6f2e2192):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 10
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 375 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6)):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 374 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5)):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 373 (java.util.concurrent.ThreadPoolExecutor$Worker@3f568709[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 368 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 367 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 366 (java.util.concurrent.ThreadPoolExecutor$Worker@507b9b35[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 361 
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 360 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 350 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@199cccae): State: TIMED_WAITING Blocked count: 0 Waited count: 10 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088) java.lang.Thread.run(Thread.java:748) Thread 349 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@28232ce3): State: TIMED_WAITING Blocked count: 0 Waited count: 10 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088) java.lang.Thread.run(Thread.java:748) Thread 348 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2)): 
State: TIMED_WAITING Blocked count: 17 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 347 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4)): State: TIMED_WAITING Blocked count: 16 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 346 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3)): State: TIMED_WAITING Blocked count: 21 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 345 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1)): State: TIMED_WAITING Blocked count: 21 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 335 (IPC Server handler 9 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 562 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 334 (IPC Server handler 8 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 568 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 333 (IPC Server handler 7 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 570 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 332 (IPC Server handler 6 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 572 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 331 (IPC Server handler 5 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 569 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 330 (IPC Server handler 4 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 567 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 329 (IPC Server handler 3 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 569 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 328 (IPC Server handler 2 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 565 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 327 (IPC Server handler 1 on 33303): State: 
TIMED_WAITING Blocked count: 0 Waited count: 566 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 326 (IPC Server handler 0 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 564 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 321 (IPC Server listener on 33303): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 324 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) 
org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 251 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6dda7de8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 325 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 298 Waited count: 699 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 323 (IPC Server idle connection scanner for port 33303): State: TIMED_WAITING Blocked count: 1 Waited count: 58 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 322 (Socket Reader #1 for port 33303): 
State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 320 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@717546f2): State: TIMED_WAITING Blocked count: 0 Waited count: 1123 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 256 (nioEventLoopGroup-6-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 255 (Timer-3): State: TIMED_WAITING Blocked count: 0 Waited count: 19 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 254 (1921255371@qtp-2074985929-1): State: TIMED_WAITING Blocked count: 0 Waited count: 10 Stack: java.lang.Object.wait(Native Method) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 253 (1142869926@qtp-2074985929-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46872): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 252 (pool-7-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 246 (IPC Server handler 9 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 572 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 245 (IPC Server handler 8 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 568 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 244 (IPC Server handler 7 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 573 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 243 (IPC Server handler 6 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 570 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 242 (IPC Server handler 5 on 59129): State: 
TIMED_WAITING Blocked count: 0 Waited count: 563 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 241 (IPC Server handler 4 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 563 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 240 (IPC Server handler 3 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 566 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 239 (IPC Server handler 2 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 570 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 238 (IPC Server handler 1 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 564 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 237 (IPC Server handler 0 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 568 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 232 (IPC Server listener on 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 235 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 160 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3d7bc726): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 236 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 325 Waited count: 694 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 234 (IPC Server idle connection scanner for port 59129): State: TIMED_WAITING Blocked count: 1 Waited count: 58 
Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 233 (Socket Reader #1 for port 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 231 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@70916a98): State: TIMED_WAITING Blocked count: 0 Waited count: 1125 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 167 (nioEventLoopGroup-4-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 166 (Timer-2): State: TIMED_WAITING Blocked count: 0 Waited count: 19 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 164 (IPC Client 
(291152797) connection to localhost/127.0.0.1:45471 from jenkins): State: TIMED_WAITING Blocked count: 628 Waited count: 626 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934) org.apache.hadoop.ipc.Client$Connection.run(Client.java:979) Thread 163 (251275394@qtp-414562224-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:57500): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 162 (212430440@qtp-414562224-0): State: TIMED_WAITING Blocked count: 0 Waited count: 10 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 161 (pool-6-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 155 (IPC Server handler 9 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 569 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 154 (IPC Server handler 8 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 568 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 153 (IPC Server handler 7 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 578 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 152 (IPC Server handler 6 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 581 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 151 (IPC Server handler 5 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 574 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 150 (IPC Server handler 4 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 581 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 149 (IPC Server handler 3 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 578 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 148 (IPC Server handler 2 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 575 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 147 (IPC Server handler 1 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 579 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 146 (IPC Server handler 0 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 574 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 141 (IPC Server listener on 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 144 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 70 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2a3ed352): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 145 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 323 Waited count: 691 
Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 143 (IPC Server idle connection scanner for port 33361): State: TIMED_WAITING Blocked count: 1 Waited count: 58 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 142 (Socket Reader #1 for port 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 140 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@32a2fdb6): State: TIMED_WAITING Blocked count: 0 Waited count: 1126 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 75 (nioEventLoopGroup-2-1): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 74 (Timer-1): State: TIMED_WAITING Blocked count: 0 Waited count: 19 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 73 (1473617757@qtp-1074412217-1): State: TIMED_WAITING Blocked count: 0 Waited count: 10 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 72 (944216257@qtp-1074412217-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34827): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 71 (pool-4-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 65 (CacheReplicationMonitor(727192365)): State: TIMED_WAITING Blocked count: 0 Waited count: 20 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Thread 64 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@306f96e5): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4739) java.lang.Thread.run(Thread.java:748) Thread 63 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@13e751be): State: TIMED_WAITING Blocked count: 0 Waited count: 2 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4656) java.lang.Thread.run(Thread.java:748) Thread 62 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5b83ab23): State: TIMED_WAITING Blocked count: 0 Waited count: 113 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4612) java.lang.Thread.run(Thread.java:748) Thread 61 
(org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7e547db5): State: TIMED_WAITING Blocked count: 0 Waited count: 283 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:431) java.lang.Thread.run(Thread.java:748) Thread 60 (IPC Server handler 9 on 45471): State: TIMED_WAITING Blocked count: 16 Waited count: 714 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 59 (IPC Server handler 8 on 45471): State: TIMED_WAITING Blocked count: 13 Waited count: 713 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 58 (IPC Server handler 7 on 45471): State: TIMED_WAITING Blocked count: 14 Waited count: 712 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 57 (IPC Server handler 6 on 45471): State: TIMED_WAITING Blocked count: 14 
Waited count: 721 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 56 (IPC Server handler 5 on 45471): State: TIMED_WAITING Blocked count: 12 Waited count: 725 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 55 (IPC Server handler 4 on 45471): State: TIMED_WAITING Blocked count: 8 Waited count: 712 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 54 (IPC Server handler 3 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 713 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 53 (IPC Server handler 2 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 711 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 52 (IPC Server handler 1 on 45471): State: TIMED_WAITING Blocked count: 21 Waited count: 723 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 51 (IPC Server handler 0 on 45471): State: TIMED_WAITING Blocked count: 35 Waited count: 721 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 41 (IPC Server listener on 45471): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 44 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 38 (Block report processor): State: WAITING Blocked count: 6 Waited count: 105 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1c8f1d10 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:3860) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3849) Thread 37 (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652): State: TIMED_WAITING Blocked count: 1 Waited count: 188 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3635) java.lang.Thread.run(Thread.java:748) Thread 39 (org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6918b9d): State: TIMED_WAITING Blocked count: 0 Waited 
count: 113 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:401) java.lang.Thread.run(Thread.java:748) Thread 50 (DecommissionMonitor-0): State: TIMED_WAITING Blocked count: 0 Waited count: 188 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 49 (org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@2e9478fa): State: TIMED_WAITING Blocked count: 0 Waited count: 2 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221) java.lang.Thread.run(Thread.java:748) Thread 45 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@6008e15b): State: TIMED_WAITING Blocked count: 0 Waited count: 1128 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 43 (IPC Server idle connection scanner for port 45471): State: TIMED_WAITING Blocked count: 1 Waited count: 58 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 42 (Socket Reader #1 for port 45471): State: 
RUNNABLE Blocked count: 2 Waited count: 3 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 36 (Timer-0): State: TIMED_WAITING Blocked count: 0 Waited count: 19 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 35 (2067905146@qtp-1695695008-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54312): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 34 (1265201788@qtp-1695695008-0): State: TIMED_WAITING Blocked count: 0 Waited count: 10 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 33 (pool-2-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 24 (org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner): State: WAITING Blocked count: 1 Waited count: 2 Waiting on java.lang.ref.ReferenceQueue$Lock@fcd6d08 Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3060) java.lang.Thread.run(Thread.java:748) Thread 23 (Time-limited test): State: RUNNABLE Blocked count: 280 Waited count: 473 Stack: sun.management.ThreadImpl.getThreadInfo1(Native Method) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139) org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:168) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) org.apache.hadoop.hbase.util.Threads$PrintThreadInfoLazyHolder$1.printThreadInfo(Threads.java:294) org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:341) 
org.apache.hadoop.hbase.util.Threads.threadDumpingIsAlive(Threads.java:135) org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:400) org.apache.hadoop.hbase.MiniHBaseCluster.waitUntilShutDown(MiniHBaseCluster.java:861) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:1123) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1105) org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDownAfterClass(RestoreSnapshotFromClientTestBase.java:73) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) Thread 19 (surefire-forkedjvm-ping-30s): State: TIMED_WAITING Blocked count: 569 Waited count: 1116 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 18 (surefire-forkedjvm-command-thread): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:255) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readInt(DataInputStream.java:387) org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115) org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:391) java.lang.Thread.run(Thread.java:748) Thread 4 (Signal Dispatcher): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: Thread 3 (Finalizer): State: WAITING Blocked count: 19 Waited count: 10 Waiting on java.lang.ref.ReferenceQueue$Lock@39047c6c Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216) Thread 2 (Reference Handler): State: WAITING Blocked count: 9 Waited count: 7 Waiting on java.lang.ref.Reference$Lock@5dc5e369 Stack: java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) java.lang.ref.Reference.tryHandlePending(Reference.java:191) java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) Thread 1 (main): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.FutureTask.awaitDone(FutureTask.java:426) java.util.concurrent.FutureTask.get(FutureTask.java:204) org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:141) org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:127) org.junit.rules.RunRules.evaluate(RunRules.java:20) org.junit.runners.ParentRunner.run(ParentRunner.java:363) org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) 
    org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
    org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
    org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
    org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
2018-12-04 20:58:16,806 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 0.313sec; sending interrupt
2018-12-04 20:58:18,808 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 2.315sec; sending interrupt
2018-12-04 20:58:20,810 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 4.317sec; sending interrupt
2018-12-04 20:58:22,811 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 6.318sec; sending interrupt
2018-12-04 20:58:24,813 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 8.32sec; sending interrupt
2018-12-04 20:58:26,814 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 10.321sec; sending interrupt
2018-12-04 20:58:28,818 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 12.325sec; sending interrupt
2018-12-04 20:58:30,826 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 14.333sec; sending interrupt
2018-12-04 20:58:32,827 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 16.334sec; sending interrupt
2018-12-04 20:58:34,829 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 18.336sec; sending interrupt
2018-12-04 20:58:36,831 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 20.338sec; sending interrupt
2018-12-04 20:58:38,832 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 22.339sec; sending interrupt
2018-12-04 20:58:40,836 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 24.343sec; sending interrupt
2018-12-04 20:58:42,837 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 26.344sec; sending interrupt
2018-12-04 20:58:44,842 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 28.349sec; sending interrupt
2018-12-04 20:58:46,843 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 30.35sec; sending interrupt
2018-12-04 20:58:48,844 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 32.351sec; sending interrupt
2018-12-04 20:58:50,845 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 34.352sec; sending interrupt
2018-12-04 20:58:52,851 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 36.358sec; sending interrupt
2018-12-04 20:58:54,852 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 38.359sec; sending interrupt
2018-12-04 20:58:56,853 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 40.36sec; sending interrupt
2018-12-04 20:58:58,855 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 42.362sec; sending interrupt
2018-12-04 20:59:00,856 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 44.363sec; sending interrupt
2018-12-04 20:59:01,857 DEBUG [RS:2;asf910:36011-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): data stats (chunk size=2097152): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-12-04 20:59:01,857 DEBUG [RS:1;asf910:51486-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): data stats (chunk size=2097152): current pool size=9, created chunk count=9, reused chunk count=6, reuseRatio=40.00%
2018-12-04 20:59:01,857 DEBUG [RS:2;asf910:36011-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-12-04 20:59:01,858 DEBUG [RS:1;asf910:51486-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(428): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2018-12-04 20:59:02,857 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 46.364sec; sending interrupt
2018-12-04 20:59:04,859 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 48.365sec; sending interrupt
2018-12-04 20:59:06,860 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 50.367sec; sending interrupt
2018-12-04 20:59:08,210 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2018-12-04 20:59:08,861 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 52.368sec; sending interrupt
2018-12-04 20:59:10,862 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 54.369sec; sending interrupt
2018-12-04 20:59:12,863 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 56.37sec; sending interrupt
2018-12-04 20:59:14,864 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 1mins, 58.371sec; sending interrupt
Process Thread Dump: Automatic Stack Trace every 60 seconds waiting on M:0;asf910:53736
239 active threads
Thread 1508 (Timer for 'HBase' metrics system):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 1402 (process reaper):
  State: TIMED_WAITING
  Blocked count: 2
  Waited count: 122
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 1353 (RS-EventLoopGroup-4-12):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1342 (RS-EventLoopGroup-4-11):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1320 (RS-EventLoopGroup-4-10):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1308 (RS-EventLoopGroup-4-9):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1300 (IPC Parameter Sending Thread #3):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 710
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 1285 (RS-EventLoopGroup-4-8):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1272 (RS-EventLoopGroup-4-7):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1120 (RS-EventLoopGroup-4-6):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1011 (RS-EventLoopGroup-3-4):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1010 (Default-IPC-NioEventLoopGroup-7-4):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1009 (Default-IPC-NioEventLoopGroup-7-3):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 882 (RS-EventLoopGroup-4-5):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 745 (RS-EventLoopGroup-1-5):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 744 (Default-IPC-NioEventLoopGroup-7-2):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 743 (RS-EventLoopGroup-4-4):
  State: RUNNABLE
  Blocked count: 6
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 742 (Default-IPC-NioEventLoopGroup-7-1):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 737 (region-location-1):
  State: WAITING
  Blocked count: 3
  Waited count: 7
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 736 (region-location-0):
  State: WAITING
  Blocked count: 1
  Waited count: 3
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 732 (RS-EventLoopGroup-3-3):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 731 (RS-EventLoopGroup-5-32):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 725 (RS-EventLoopGroup-3-2):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 724 (RS-EventLoopGroup-5-31):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 701 (RS-EventLoopGroup-4-3):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 700 (RS-EventLoopGroup-5-30):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 695 (RS-EventLoopGroup-5-28):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 694 (RS-EventLoopGroup-5-29):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 691 (RS-EventLoopGroup-5-27):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 687 (RS-EventLoopGroup-5-26):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 686 (RS-EventLoopGroup-5-25):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 684 (RS-EventLoopGroup-5-24):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 682 (RS-EventLoopGroup-4-2):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 681 (RS-EventLoopGroup-5-23):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 675 (RS-EventLoopGroup-5-22):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 672 (RS-EventLoopGroup-5-16):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 666 (RS-EventLoopGroup-5-14):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 674 (RS-EventLoopGroup-5-15):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 673 (RS-EventLoopGroup-5-17):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 671 (RS-EventLoopGroup-5-18):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 670 (RS-EventLoopGroup-5-20):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 669 (RS-EventLoopGroup-5-21):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 667 (RS-EventLoopGroup-5-19):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 650 (RS-EventLoopGroup-5-11):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 647 (RS-EventLoopGroup-5-13):
  State: RUNNABLE
  Blocked count: 9
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 646 (RS-EventLoopGroup-5-12):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 645 (RS-EventLoopGroup-5-10): State: RUNNABLE Blocked count: 3 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 644 (RS-EventLoopGroup-5-9): State: RUNNABLE Blocked count: 3 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 
643 (RS-EventLoopGroup-5-8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 642 (RS-EventLoopGroup-5-7): State: RUNNABLE Blocked count: 7 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 641 (RS-EventLoopGroup-5-5): State: RUNNABLE Blocked count: 1 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 640 (RS-EventLoopGroup-5-6): State: RUNNABLE Blocked count: 5 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 626 (RS:1;asf910:51486-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 624 (RS:2;asf910:36011-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 
3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 622 (RS:1;asf910:51486-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 621 (RS:2;asf910:36011-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 602 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7a4b6fa6 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 604 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@35397d65 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 603 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@294176ac Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 581 (RegionServerTracker-0): State: WAITING Blocked count: 7 Waited count: 8 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@31bfbac5 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 580 (master/asf910:0:becomeActiveMaster-HFileCleaner.small.0-1543956541242): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38e47ecb Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:234) Thread 579 (master/asf910:0:becomeActiveMaster-HFileCleaner.large.0-1543956541242): State: WAITING Blocked count: 0 Waited count: 1 Waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61dd31d9 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:219) Thread 578 (snapshot-hfile-cleaner-cache-refresher): State: TIMED_WAITING Blocked count: 6 Waited count: 16 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 576 (OldWALsCleaner-1): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Thread 575 (OldWALsCleaner-0): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Thread 574 (master/asf910:0:becomeActiveMaster-EventThread): State: WAITING Blocked count: 0 Waited count: 2 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5df7a6e3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Thread 573 (master/asf910:0:becomeActiveMaster-SendThread(localhost:64381)): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Thread 527 (PEWorker-1): State: BLOCKED Blocked count: 10 Waited count: 89 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isTakingSnapshot(SnapshotManager.java:423) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.prepareSplitRegion(SplitTableRegionProcedure.java:470) 
org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:244) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:97) org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189) org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039) Thread 572 (threadDeathWatcher-6-1): State: TIMED_WAITING Blocked count: 0 Waited count: 616 Stack: java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 571 (RS-EventLoopGroup-1-4): State: RUNNABLE Blocked count: 47 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 570 (RS-EventLoopGroup-1-3): 
State: RUNNABLE Blocked count: 36 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 569 (RS-EventLoopGroup-1-2): State: RUNNABLE Blocked count: 29 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 523 (RpcClient-timer-pool1-t1): State: TIMED_WAITING Blocked count: 0 Waited count: 61539 Stack: java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459) java.lang.Thread.run(Thread.java:748) Thread 568 (RS-EventLoopGroup-5-3): State: RUNNABLE Blocked count: 34 Waited count: 0 Stack: 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 567 (RS-EventLoopGroup-5-4): State: RUNNABLE Blocked count: 33 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 566 (RS-EventLoopGroup-5-2): State: RUNNABLE Blocked count: 39 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 564 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE): State: RUNNABLE Blocked count: 83 Waited count: 83 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291) java.lang.Thread.run(Thread.java:748) Thread 565 (ResponseProcessor for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847) Thread 563 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE): State: RUNNABLE Blocked count: 33 Waited count: 31 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) 
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    java.lang.Thread.run(Thread.java:748)
Thread 562 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[]):
  State: WAITING
  Blocked count: 171
  Waited count: 172
  Waiting on java.util.LinkedList@6be3c98f
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    java.lang.Thread.run(Thread.java:748)
Thread 561 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 560 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 559 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 5
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 544 (DataStreamer for file /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005):
  State: TIMED_WAITING
  Blocked count: 307
  Waited count: 328
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 525 (WALProcedureStoreSyncThread):
  State: TIMED_WAITING
  Blocked count: 307
  Waited count: 509
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:822)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:111)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:313)
Thread 524 (Idle-Rpc-Conn-Sweeper-pool2-t1):
  State: WAITING
  Blocked count: 0
  Waited count: 46
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@57e62326
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 519 (Thread-186):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 617
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 517 (master/asf910:0.splitLogManager..Chore.1):
  State: WAITING
  Blocked count: 0
  Waited count: 499
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4440678d
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 489 (org.apache.hadoop.hdfs.PeerCache@688f09e2):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 206
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:255)
    org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
    org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
    java.lang.Thread.run(Thread.java:748)
Thread 485 (Monitor thread for TaskMonitor):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 62
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    java.lang.Thread.run(Thread.java:748)
Thread 423 (M:0;asf910:53736):
  State: TIMED_WAITING
  Blocked count: 6
  Waited count: 5442
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1260)
    org.apache.hadoop.hbase.procedure2.StoppableThread.awaitTermination(StoppableThread.java:42)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.join(ProcedureExecutor.java:697)
    org.apache.hadoop.hbase.master.HMaster.stopProcedureExecutor(HMaster.java:1470)
    org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1413)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1133)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:595)
    java.lang.Thread.run(Thread.java:748)
Thread 466 (RS-EventLoopGroup-5-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 447 (RS-EventLoopGroup-4-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 425 (RS-EventLoopGroup-3-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 422 (RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@724e7839
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 421 (RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5ff22b89
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 420 (RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@69da2841
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 419 (RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 3
  Waiting on java.util.concurrent.Semaphore$NonfairSync@427563ce
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 418 (RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@43fb5409
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 417 (RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@d9b919c
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 416 (RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@35bb384e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 415 (RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5c394624
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 414 (RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@45fb1ab6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 413 (RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 70
  Waited count: 1713
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736):
  State: TIMED_WAITING
  Blocked count: 60
  Waited count: 359
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
    java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
    org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.tryAcquire(LockManager.java:162)
    org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.acquire(LockManager.java:123)
    org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.prepare(TakeSnapshotHandler.java:141)
    org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:60)
    org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:46)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotTable(SnapshotManager.java:524)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:510)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:633)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 411 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 50
  Waited count: 1102
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 410 (RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 40
  Waited count: 2534
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 409 (RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 12
  Waited count: 2935
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 408 (Time-limited test-EventThread):
  State: WAITING
  Blocked count: 15
  Waited count: 29
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@225b2ff1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
Thread 407 (Time-limited test-SendThread(localhost:64381)):
  State: RUNNABLE
  Blocked count: 9
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
Thread 404 (RS-EventLoopGroup-1-1):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 403 (HBase-Metrics2-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 372
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 394 (LeaseRenewer:jenkins@localhost:45471):
  State: TIMED_WAITING
  Blocked count: 19
  Waited count: 659
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
    org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    java.lang.Thread.run(Thread.java:748)
Thread 391 (ProcessThread(sid:0 cport:64381):):
  State: WAITING
  Blocked count: 0
  Waited count: 2082
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4eede27
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122)
Thread 390 (SyncThread:0):
  State: WAITING
  Blocked count: 3
  Waited count: 1997
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7530c34
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
Thread 389 (SessionTracker):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 311
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)
Thread 388 (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:64381):
  State: RUNNABLE
  Blocked count: 29
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173)
    java.lang.Thread.run(Thread.java:748)
Thread 387 (java.util.concurrent.ThreadPoolExecutor$Worker@ee3a971[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 382 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 381 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 376 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@6f2e2192):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 11
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 375 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6)):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 374 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5)):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 373 (java.util.concurrent.ThreadPoolExecutor$Worker@3f568709[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 368 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 367 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 366 (java.util.concurrent.ThreadPoolExecutor$Worker@507b9b35[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 361 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 360 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 350 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@199cccae):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 11
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 349 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@28232ce3):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 11
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 348 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2)):
  State: TIMED_WAITING
  Blocked count: 17
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 347 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4)):
  State: TIMED_WAITING
  Blocked count: 16
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 346 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3)):
  State: TIMED_WAITING
  Blocked count: 21
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 345
(VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1)): State: TIMED_WAITING Blocked count: 21 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 335 (IPC Server handler 9 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 622 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 334 (IPC Server handler 8 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 628 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 333 (IPC Server handler 7 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 630 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 332 (IPC Server handler 6 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 632 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 331 (IPC Server handler 5 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 629 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 330 (IPC Server handler 4 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 630 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 329 (IPC Server handler 3 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 629 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 328 (IPC Server handler 2 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 625 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 327 (IPC Server handler 1 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 626 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 326 (IPC Server handler 0 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 624 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 321 (IPC Server listener on 33303): State: 
RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 324 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 251 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6dda7de8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 325 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, 
[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 318 Waited count: 759 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 323 (IPC Server idle connection scanner for port 33303): State: TIMED_WAITING Blocked count: 1 Waited count: 64 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 322 (Socket Reader #1 for port 33303): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 320 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@717546f2): State: TIMED_WAITING Blocked count: 0 Waited count: 1243 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 256 (nioEventLoopGroup-6-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 255 (Timer-3): State: TIMED_WAITING Blocked count: 0 Waited count: 21 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 254 (1921255371@qtp-2074985929-1): State: TIMED_WAITING Blocked count: 0 Waited count: 11 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 253 (1142869926@qtp-2074985929-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46872): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 252 (pool-7-thread-1): State: TIMED_WAITING 
Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 246 (IPC Server handler 9 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 632 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 245 (IPC Server handler 8 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 628 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 244 (IPC Server handler 7 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 633 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 243 (IPC Server handler 6 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 630 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 242 (IPC Server handler 5 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 624 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 241 (IPC Server handler 4 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 623 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 240 (IPC Server handler 3 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 626 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 239 (IPC Server handler 2 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 630 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 238 (IPC Server handler 1 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 624 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 237 (IPC Server handler 0 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 628 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 232 (IPC Server listener on 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 235 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 160 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3d7bc726): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 236 (DataNode: 
[[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 343 Waited count: 752 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 234 (IPC Server idle connection scanner for port 59129): State: TIMED_WAITING Blocked count: 1 Waited count: 64 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 233 (Socket Reader #1 for port 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 231 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@70916a98): State: TIMED_WAITING Blocked count: 0 Waited count: 1245 Stack: 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 167 (nioEventLoopGroup-4-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 166 (Timer-2): State: TIMED_WAITING Blocked count: 0 Waited count: 21 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 164 (IPC Client (291152797) connection to localhost/127.0.0.1:45471 from jenkins): State: TIMED_WAITING Blocked count: 688 Waited count: 686 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934) org.apache.hadoop.ipc.Client$Connection.run(Client.java:979) Thread 163 (251275394@qtp-414562224-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:57500): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) 
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 162 (212430440@qtp-414562224-0): State: TIMED_WAITING Blocked count: 0 Waited count: 11 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 161 (pool-6-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 155 (IPC Server handler 9 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 629 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 154 (IPC Server handler 8 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 628 Stack: 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 153 (IPC Server handler 7 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 639 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 152 (IPC Server handler 6 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 641 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 151 (IPC Server handler 5 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 634 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 150 (IPC Server handler 4 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 641 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 149 (IPC Server handler 3 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 638 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 148 (IPC Server handler 2 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 635 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 147 (IPC Server handler 1 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 640 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 146 (IPC Server handler 0 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 634 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 141 (IPC Server listener on 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 144 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 70 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2a3ed352): State: RUNNABLE Blocked count: 0 
Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 145 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 342 Waited count: 750 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 143 (IPC Server idle connection scanner for port 33361): State: TIMED_WAITING Blocked count: 1 Waited count: 64 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 142 (Socket Reader #1 for port 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 140 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@32a2fdb6): State: TIMED_WAITING Blocked count: 0 Waited count: 1246 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 75 (nioEventLoopGroup-2-1): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 74 (Timer-1): State: TIMED_WAITING Blocked count: 0 Waited count: 21 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 73 (1473617757@qtp-1074412217-1): State: TIMED_WAITING Blocked count: 0 Waited count: 11 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 72 (944216257@qtp-1074412217-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34827): State: RUNNABLE Blocked count: 1 Waited count: 
1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 71 (pool-4-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 65 (CacheReplicationMonitor(727192365)): State: TIMED_WAITING Blocked count: 0 Waited count: 22 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Thread 64 
(org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@306f96e5): State: TIMED_WAITING Blocked count: 1 Waited count: 4 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4739) java.lang.Thread.run(Thread.java:748) Thread 63 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@13e751be): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4656) java.lang.Thread.run(Thread.java:748) Thread 62 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5b83ab23): State: TIMED_WAITING Blocked count: 0 Waited count: 125 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4612) java.lang.Thread.run(Thread.java:748) Thread 61 (org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7e547db5): State: TIMED_WAITING Blocked count: 0 Waited count: 313 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:431) java.lang.Thread.run(Thread.java:748) Thread 60 (IPC Server handler 9 on 45471): State: TIMED_WAITING Blocked count: 16 Waited count: 774 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 59 (IPC Server handler 8 on 45471): State: TIMED_WAITING Blocked count: 13 Waited count: 773 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 58 (IPC Server handler 7 on 45471): State: TIMED_WAITING Blocked count: 14 Waited count: 772 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 57 (IPC Server handler 6 on 45471): State: TIMED_WAITING Blocked count: 14 Waited count: 781 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 56 (IPC Server handler 5 on 45471): State: TIMED_WAITING Blocked count: 12 Waited count: 785 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 55 (IPC Server handler 4 on 45471): State: TIMED_WAITING Blocked count: 8 Waited count: 772 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 54 (IPC Server handler 3 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 773 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 53 (IPC Server handler 2 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 771 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 52 (IPC Server handler 1 on 45471): State: TIMED_WAITING Blocked count: 21 Waited count: 785 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 51 (IPC Server handler 0 on 45471): State: TIMED_WAITING Blocked count: 35 Waited count: 782 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 41 (IPC Server listener on 45471): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 44 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 38 (Block report processor): State: WAITING Blocked count: 6 Waited count: 105 Waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1c8f1d10 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:3860) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3849) Thread 37 (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652): State: TIMED_WAITING Blocked count: 1 Waited count: 208 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3635) java.lang.Thread.run(Thread.java:748) Thread 39 (org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6918b9d): State: TIMED_WAITING Blocked count: 0 Waited count: 125 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:401) java.lang.Thread.run(Thread.java:748) Thread 50 (DecommissionMonitor-0): State: TIMED_WAITING Blocked count: 0 Waited count: 209 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 49 (org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@2e9478fa): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221) java.lang.Thread.run(Thread.java:748) Thread 45 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@6008e15b): State: TIMED_WAITING Blocked count: 0 Waited count: 1248 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 43 (IPC Server idle connection scanner for port 45471): State: TIMED_WAITING Blocked count: 1 Waited count: 64 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 42 (Socket Reader #1 for port 45471): State: RUNNABLE Blocked count: 2 Waited count: 3 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 36 (Timer-0): State: TIMED_WAITING Blocked count: 0 Waited count: 21 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 35 (2067905146@qtp-1695695008-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54312): State: RUNNABLE Blocked count: 1 
Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 34 (1265201788@qtp-1695695008-0): State: TIMED_WAITING Blocked count: 0 Waited count: 11 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 33 (pool-2-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 24 (org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner): State: WAITING Blocked count: 1 Waited count: 2 Waiting on java.lang.ref.ReferenceQueue$Lock@fcd6d08 Stack: java.lang.Object.wait(Native Method) 
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3060) java.lang.Thread.run(Thread.java:748) Thread 23 (Time-limited test): State: RUNNABLE Blocked count: 280 Waited count: 474 Stack: sun.management.ThreadImpl.getThreadInfo1(Native Method) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139) org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:168) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) org.apache.hadoop.hbase.util.Threads$PrintThreadInfoLazyHolder$1.printThreadInfo(Threads.java:294) org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:341) org.apache.hadoop.hbase.util.Threads.threadDumpingIsAlive(Threads.java:135) org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:400) org.apache.hadoop.hbase.MiniHBaseCluster.waitUntilShutDown(MiniHBaseCluster.java:861) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:1123) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1105) org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDownAfterClass(RestoreSnapshotFromClientTestBase.java:73) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) Thread 19 (surefire-forkedjvm-ping-30s): State: TIMED_WAITING Blocked count: 628 Waited count: 1236 Stack: 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 18 (surefire-forkedjvm-command-thread): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:255) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readInt(DataInputStream.java:387) org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115) org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:391) java.lang.Thread.run(Thread.java:748) Thread 4 (Signal Dispatcher): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: Thread 3 (Finalizer): State: WAITING Blocked count: 19 Waited count: 10 Waiting on java.lang.ref.ReferenceQueue$Lock@39047c6c Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216) Thread 2 (Reference Handler): State: WAITING Blocked count: 9 Waited count: 7 Waiting on java.lang.ref.Reference$Lock@5dc5e369 Stack: java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
java.lang.ref.Reference.tryHandlePending(Reference.java:191) java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) Thread 1 (main): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.FutureTask.awaitDone(FutureTask.java:426) java.util.concurrent.FutureTask.get(FutureTask.java:204) org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:141) org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:127) org.junit.rules.RunRules.evaluate(RunRules.java:20) org.junit.runners.ParentRunner.run(ParentRunner.java:363) org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 2018-12-04 20:59:16,866 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 0.373sec; sending interrupt 2018-12-04 20:59:18,867 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 2.374sec; sending interrupt 2018-12-04 20:59:20,869 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 4.376sec; sending interrupt 2018-12-04 20:59:22,872 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 
6.379sec; sending interrupt 2018-12-04 20:59:24,874 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 8.381sec; sending interrupt 2018-12-04 20:59:26,875 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 10.382sec; sending interrupt 2018-12-04 20:59:28,880 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 12.387sec; sending interrupt 2018-12-04 20:59:30,881 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 14.388sec; sending interrupt 2018-12-04 20:59:32,883 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 16.39sec; sending interrupt 2018-12-04 20:59:34,884 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 18.391sec; sending interrupt 2018-12-04 20:59:36,885 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 20.392sec; sending interrupt 2018-12-04 20:59:38,886 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 22.393sec; sending interrupt 2018-12-04 20:59:40,888 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 24.394sec; sending interrupt 2018-12-04 20:59:42,889 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 26.396sec; sending interrupt 2018-12-04 20:59:44,890 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 28.397sec; sending interrupt 2018-12-04 20:59:46,891 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 30.398sec; sending interrupt 2018-12-04 20:59:48,894 WARN [M:0;asf910:53736] 
procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 32.401sec; sending interrupt 2018-12-04 20:59:50,896 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 34.403sec; sending interrupt 2018-12-04 20:59:52,897 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 36.404sec; sending interrupt 2018-12-04 20:59:54,898 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 38.405sec; sending interrupt 2018-12-04 20:59:56,900 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 40.407sec; sending interrupt 2018-12-04 20:59:58,901 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 42.408sec; sending interrupt 2018-12-04 21:00:00,903 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 44.41sec; sending interrupt 2018-12-04 21:00:02,904 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 46.411sec; sending interrupt 2018-12-04 21:00:04,906 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 48.413sec; sending interrupt 2018-12-04 21:00:06,907 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 50.414sec; sending interrupt 2018-12-04 21:00:08,908 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 52.415sec; sending interrupt 2018-12-04 21:00:10,909 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 54.416sec; sending interrupt 2018-12-04 21:00:12,910 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 56.417sec; sending 
interrupt
2018-12-04 21:00:14,912 WARN  [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 2mins, 58.419sec; sending interrupt
Process Thread Dump: Automatic Stack Trace every 60 seconds waiting on M:0;asf910:53736
239 active threads
Thread 1508 (Timer for 'HBase' metrics system):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 7
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 1402 (process reaper):
  State: TIMED_WAITING
  Blocked count: 2
  Waited count: 183
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 1353 (RS-EventLoopGroup-4-12):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1342 (RS-EventLoopGroup-4-11):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1320 (RS-EventLoopGroup-4-10):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1308 (RS-EventLoopGroup-4-9):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1300 (IPC Parameter Sending Thread #3):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 772
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 1285 (RS-EventLoopGroup-4-8):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1272 (RS-EventLoopGroup-4-7):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1120 (RS-EventLoopGroup-4-6):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1011 (RS-EventLoopGroup-3-4):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1010 (Default-IPC-NioEventLoopGroup-7-4):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 1009 (Default-IPC-NioEventLoopGroup-7-3):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 882 (RS-EventLoopGroup-4-5):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 745 (RS-EventLoopGroup-1-5):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 744 (Default-IPC-NioEventLoopGroup-7-2):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 743 (RS-EventLoopGroup-4-4):
  State: RUNNABLE
  Blocked count: 6
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 742 (Default-IPC-NioEventLoopGroup-7-1):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 737 (region-location-1):
  State: WAITING
  Blocked count: 3
  Waited count: 7
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 736 (region-location-0):
  State: WAITING
  Blocked count: 1
  Waited count: 3
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 732 (RS-EventLoopGroup-3-3):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 731 (RS-EventLoopGroup-5-32):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 725 (RS-EventLoopGroup-3-2):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 724 (RS-EventLoopGroup-5-31):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 701 (RS-EventLoopGroup-4-3):
  State: RUNNABLE
  Blocked count: 2
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 700 (RS-EventLoopGroup-5-30):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 695 (RS-EventLoopGroup-5-28):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 694 (RS-EventLoopGroup-5-29):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 691 (RS-EventLoopGroup-5-27):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 687 (RS-EventLoopGroup-5-26):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 686 (RS-EventLoopGroup-5-25):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 684 (RS-EventLoopGroup-5-24):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 682 (RS-EventLoopGroup-4-2):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 681 (RS-EventLoopGroup-5-23):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 675 (RS-EventLoopGroup-5-22):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 672 (RS-EventLoopGroup-5-16):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 666 (RS-EventLoopGroup-5-14):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 674 (RS-EventLoopGroup-5-15):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 673 (RS-EventLoopGroup-5-17):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 671 (RS-EventLoopGroup-5-18):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 670 (RS-EventLoopGroup-5-20):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 669 (RS-EventLoopGroup-5-21):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 667 (RS-EventLoopGroup-5-19):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 650 (RS-EventLoopGroup-5-11):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 647 (RS-EventLoopGroup-5-13):
  State: RUNNABLE
  Blocked count: 9
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 646 (RS-EventLoopGroup-5-12):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 645 (RS-EventLoopGroup-5-10):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 644 (RS-EventLoopGroup-5-9):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 643 (RS-EventLoopGroup-5-8):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 642 (RS-EventLoopGroup-5-7):
  State: RUNNABLE
  Blocked count: 7
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 641 (RS-EventLoopGroup-5-5):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 640 (RS-EventLoopGroup-5-6): State: RUNNABLE Blocked count: 5 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 626 (RS:1;asf910:51486-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 624 (RS:2;asf910:36011-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 622 (RS:1;asf910:51486-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 621 (RS:2;asf910:36011-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 602 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7a4b6fa6 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 604 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@35397d65 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 603 (regionserver/asf910:0.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@294176ac Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 581 (RegionServerTracker-0): State: WAITING Blocked count: 7 Waited count: 8 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@31bfbac5 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 580 (master/asf910:0:becomeActiveMaster-HFileCleaner.small.0-1543956541242): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38e47ecb Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:234) Thread 579 (master/asf910:0:becomeActiveMaster-HFileCleaner.large.0-1543956541242): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61dd31d9 Stack: sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:219) Thread 578 (snapshot-hfile-cleaner-cache-refresher): State: TIMED_WAITING Blocked count: 6 Waited count: 16 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 576 (OldWALsCleaner-1): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Thread 575 (OldWALsCleaner-0): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Thread 574 (master/asf910:0:becomeActiveMaster-EventThread): State: WAITING Blocked count: 0 Waited count: 2 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5df7a6e3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Thread 573 (master/asf910:0:becomeActiveMaster-SendThread(localhost:64381)): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Thread 527 (PEWorker-1): State: BLOCKED Blocked count: 10 Waited count: 89 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isTakingSnapshot(SnapshotManager.java:423) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.prepareSplitRegion(SplitTableRegionProcedure.java:470) 
org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:244) org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:97) org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189) org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462) org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039) Thread 572 (threadDeathWatcher-6-1): State: TIMED_WAITING Blocked count: 0 Waited count: 676 Stack: java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 571 (RS-EventLoopGroup-1-4): State: RUNNABLE Blocked count: 47 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 570 (RS-EventLoopGroup-1-3): 
State: RUNNABLE Blocked count: 36 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 569 (RS-EventLoopGroup-1-2): State: RUNNABLE Blocked count: 29 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 523 (RpcClient-timer-pool1-t1): State: TIMED_WAITING Blocked count: 0 Waited count: 67556 Stack: java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459) java.lang.Thread.run(Thread.java:748) Thread 568 (RS-EventLoopGroup-5-3): State: RUNNABLE Blocked count: 34 Waited count: 0 Stack: 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 567 (RS-EventLoopGroup-5-4): State: RUNNABLE Blocked count: 33 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 566 (RS-EventLoopGroup-5-2): State: RUNNABLE Blocked count: 39 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 564 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE): State: RUNNABLE Blocked count: 83 Waited count: 83 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291) java.lang.Thread.run(Thread.java:748) Thread 565 (ResponseProcessor for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847) Thread 563 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE): State: RUNNABLE Blocked count: 33 Waited count: 31 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291) java.lang.Thread.run(Thread.java:748) Thread 562 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[]): State: WAITING Blocked count: 173 Waited count: 174 Waiting on java.util.LinkedList@6be3c98f Stack: java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) java.lang.Thread.run(Thread.java:748) Thread 561 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]): State: RUNNABLE Blocked count: 4 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808) Thread 560 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]): State: RUNNABLE Blocked count: 4 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808) Thread 559 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]): State: RUNNABLE Blocked count: 5 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808) Thread 544 (DataStreamer for file 
/user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005): State: TIMED_WAITING Blocked count: 307 Waited count: 330 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523) Thread 525 (WALProcedureStoreSyncThread): State: TIMED_WAITING Blocked count: 307 Waited count: 509 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:822) org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:111) org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:313) Thread 524 (Idle-Rpc-Conn-Sweeper-pool2-t1): State: WAITING Blocked count: 0 Waited count: 46 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@57e62326 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 519 (Thread-186): State: TIMED_WAITING Blocked count: 0 Waited count: 677 Stack: 
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 517 (master/asf910:0.splitLogManager..Chore.1):
  State: WAITING
  Blocked count: 0
  Waited count: 499
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4440678d
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 489 (org.apache.hadoop.hdfs.PeerCache@688f09e2):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 226
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:255)
    org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
    org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
    java.lang.Thread.run(Thread.java:748)
Thread 485 (Monitor thread for TaskMonitor):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 68
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    java.lang.Thread.run(Thread.java:748)
Thread 423 (M:0;asf910:53736):
  State: TIMED_WAITING
  Blocked count: 6
  Waited count: 5682
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1260)
    org.apache.hadoop.hbase.procedure2.StoppableThread.awaitTermination(StoppableThread.java:42)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.join(ProcedureExecutor.java:697)
    org.apache.hadoop.hbase.master.HMaster.stopProcedureExecutor(HMaster.java:1470)
    org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1413)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1133)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:595)
    java.lang.Thread.run(Thread.java:748)
Thread 466 (RS-EventLoopGroup-5-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 447 (RS-EventLoopGroup-4-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 425 (RS-EventLoopGroup-3-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 422 (RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@724e7839
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 421 (RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5ff22b89
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 420 (RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@69da2841
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 419 (RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 3
  Waiting on java.util.concurrent.Semaphore$NonfairSync@427563ce
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 418 (RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@43fb5409
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 417 (RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@d9b919c
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 416 (RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@35bb384e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 415 (RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5c394624
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 414 (RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@45fb1ab6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 413 (RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 70
  Waited count: 1713
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736):
  State: TIMED_WAITING
  Blocked count: 60
  Waited count: 359
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
    java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
    org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.tryAcquire(LockManager.java:162)
    org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.acquire(LockManager.java:123)
    org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.prepare(TakeSnapshotHandler.java:141)
    org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:60)
    org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:46)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotTable(SnapshotManager.java:524)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:510)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:633)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 411 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 50
  Waited count: 1102
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 410 (RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 40
  Waited count: 2534
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 409 (RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: BLOCKED
  Blocked count: 12
  Waited count: 2935
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 408 (Time-limited test-EventThread):
  State: WAITING
  Blocked count: 15
  Waited count: 29
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@225b2ff1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
Thread 407 (Time-limited test-SendThread(localhost:64381)):
  State: RUNNABLE
  Blocked count: 9
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
Thread 404 (RS-EventLoopGroup-1-1):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 403 (HBase-Metrics2-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 400
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 394 (LeaseRenewer:jenkins@localhost:45471):
  State: TIMED_WAITING
  Blocked count: 21
  Waited count: 723
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
    org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    java.lang.Thread.run(Thread.java:748)
Thread 391 (ProcessThread(sid:0 cport:64381):):
  State: WAITING
  Blocked count: 0
  Waited count: 2090
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4eede27
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122)
Thread 390 (SyncThread:0):
  State: WAITING
  Blocked count: 3
  Waited count: 2005
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7530c34
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
Thread 389 (SessionTracker):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 341
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)
Thread 388 (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:64381):
  State: RUNNABLE
  Blocked count: 29
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173)
    java.lang.Thread.run(Thread.java:748)
Thread 387 (java.util.concurrent.ThreadPoolExecutor$Worker@ee3a971[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 382 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 381
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 376 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@6f2e2192):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 12
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 375 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6)):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 374 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5)):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 373 (java.util.concurrent.ThreadPoolExecutor$Worker@3f568709[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 368 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 367 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 366 (java.util.concurrent.ThreadPoolExecutor$Worker@507b9b35[State = -1, empty queue]):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 1
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 361 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 360 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/current/BP-2082010496-67.195.81.154-1543956529943):
  State: TIMED_WAITING
  Blocked count: 2
  Waited count: 4
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    java.lang.Thread.run(Thread.java:748)
Thread 350 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@199cccae):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 12
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 349 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@28232ce3):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 12
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    java.lang.Thread.run(Thread.java:748)
Thread 348 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2)):
  State: TIMED_WAITING
  Blocked count: 17
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 347 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4)):
  State: TIMED_WAITING
  Blocked count: 16
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 346 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3)):
  State: TIMED_WAITING
  Blocked count: 21
  Waited count: 2
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 345
(VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1)): State: TIMED_WAITING Blocked count: 21 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 335 (IPC Server handler 9 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 682 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 334 (IPC Server handler 8 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 688 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 333 (IPC Server handler 7 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 692 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 332 (IPC Server handler 6 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 694 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 331 (IPC Server handler 5 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 691 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 330 (IPC Server handler 4 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 690 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 329 (IPC Server handler 3 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 690 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 328 (IPC Server handler 2 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 685 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 327 (IPC Server handler 1 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 687 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 326 (IPC Server handler 0 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 684 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 321 (IPC Server listener on 33303): State: 
RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 324 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 251 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6dda7de8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 325 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, 
[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 337 Waited count: 818 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 323 (IPC Server idle connection scanner for port 33303): State: TIMED_WAITING Blocked count: 1 Waited count: 70 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 322 (Socket Reader #1 for port 33303): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 320 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@717546f2): State: TIMED_WAITING Blocked count: 0 Waited count: 1364 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 256 (nioEventLoopGroup-6-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 255 (Timer-3): State: TIMED_WAITING Blocked count: 0 Waited count: 23 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 254 (1921255371@qtp-2074985929-1): State: TIMED_WAITING Blocked count: 0 Waited count: 12 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 253 (1142869926@qtp-2074985929-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46872): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 252 (pool-7-thread-1): State: TIMED_WAITING 
Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 246 (IPC Server handler 9 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 692 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 245 (IPC Server handler 8 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 688 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) 2018-12-04 21:00:16,913 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 0.42sec; sending interrupt 
Thread 244 (IPC Server handler 7 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 693 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 243 (IPC Server handler 6 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 690 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 242 (IPC Server handler 5 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 684 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 241 (IPC Server handler 4 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 683 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 240 (IPC Server handler 3 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 686 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 239 (IPC Server handler 2 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 690 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 238 (IPC Server handler 1 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 684 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 237 (IPC Server handler 0 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 688 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 232 (IPC Server listener on 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 235 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 160 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3d7bc726): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) 
java.lang.Thread.run(Thread.java:748) Thread 236 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 364 Waited count: 815 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 234 (IPC Server idle connection scanner for port 59129): State: TIMED_WAITING Blocked count: 1 Waited count: 70 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 233 (Socket Reader #1 for port 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 231 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@70916a98): State: TIMED_WAITING 
Blocked count: 0 Waited count: 1365 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 167 (nioEventLoopGroup-4-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 166 (Timer-2): State: TIMED_WAITING Blocked count: 0 Waited count: 23 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 164 (IPC Client (291152797) connection to localhost/127.0.0.1:45471 from jenkins): State: TIMED_WAITING Blocked count: 749 Waited count: 747 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934) org.apache.hadoop.ipc.Client$Connection.run(Client.java:979) Thread 163 (251275394@qtp-414562224-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:57500): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 162 (212430440@qtp-414562224-0): State: TIMED_WAITING Blocked count: 0 Waited count: 12 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 161 (pool-6-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 155 (IPC Server handler 9 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 690 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 154 (IPC Server handler 8 on 
33361): State: TIMED_WAITING Blocked count: 0 Waited count: 690 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 153 (IPC Server handler 7 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 700 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 152 (IPC Server handler 6 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 702 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 151 (IPC Server handler 5 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 695 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 150 (IPC Server handler 4 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 702 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 149 (IPC Server handler 3 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 699 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 148 (IPC Server handler 2 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 696 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 147 (IPC Server handler 1 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 705 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 146 (IPC Server handler 0 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 695 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 141 (IPC Server listener on 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 144 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 70 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2a3ed352): State: RUNNABLE Blocked count: 0 
Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748) Thread 145 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471): State: RUNNABLE Blocked count: 363 Waited count: 812 Stack: java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.ipc.Client.call(Client.java:1467) org.apache.hadoop.ipc.Client.call(Client.java:1413) org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) com.sun.proxy.$Proxy32.sendHeartbeat(Unknown Source) org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:152) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:402) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:500) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748) Thread 143 (IPC Server idle connection scanner for port 33361): State: TIMED_WAITING Blocked 
count: 1 Waited count: 70 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 142 (Socket Reader #1 for port 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 140 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@32a2fdb6): State: TIMED_WAITING Blocked count: 0 Waited count: 1366 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 75 (nioEventLoopGroup-2-1): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748) Thread 74 (Timer-1): State: TIMED_WAITING Blocked count: 0 Waited count: 23 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) 
Thread 73 (1473617757@qtp-1074412217-1): State: TIMED_WAITING Blocked count: 0 Waited count: 12 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 72 (944216257@qtp-1074412217-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34827): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 71 (pool-4-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 65 (CacheReplicationMonitor(727192365)): State: TIMED_WAITING Blocked count: 0 Waited count: 24 Stack: 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Thread 64 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@306f96e5): State: TIMED_WAITING Blocked count: 1 Waited count: 4 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4739) java.lang.Thread.run(Thread.java:748) Thread 63 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@13e751be): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4656) java.lang.Thread.run(Thread.java:748) Thread 62 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5b83ab23): State: TIMED_WAITING Blocked count: 0 Waited count: 137 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4612) java.lang.Thread.run(Thread.java:748) Thread 61 (org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7e547db5): State: TIMED_WAITING Blocked count: 0 Waited count: 343 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:431) java.lang.Thread.run(Thread.java:748) Thread 60 (IPC Server handler 9 on 45471): State: TIMED_WAITING Blocked count: 16 Waited count: 835 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 59 (IPC Server handler 8 on 45471): State: TIMED_WAITING Blocked count: 13 Waited count: 834 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 58 (IPC Server handler 7 on 45471): State: TIMED_WAITING Blocked count: 14 Waited count: 833 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 57 (IPC Server handler 6 on 45471): State: TIMED_WAITING Blocked count: 14 Waited count: 843 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 56 (IPC Server handler 5 on 45471): State: TIMED_WAITING Blocked count: 12 Waited count: 846 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 55 (IPC Server handler 4 on 45471): State: TIMED_WAITING Blocked count: 8 Waited count: 833 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 54 (IPC Server handler 3 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 834 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 53 (IPC Server handler 2 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 833 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 52 (IPC Server handler 1 on 45471): State: TIMED_WAITING Blocked count: 21 Waited count: 846 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 51 (IPC Server handler 0 on 45471): State: TIMED_WAITING Blocked count: 35 Waited count: 843 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 41 (IPC Server listener on 45471): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 44 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 38 (Block report processor): State: WAITING Blocked count: 6 Waited count: 105 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1c8f1d10 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:3860) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3849) Thread 37 (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652): State: TIMED_WAITING Blocked count: 1 Waited count: 229 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3635) java.lang.Thread.run(Thread.java:748) Thread 39 (org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6918b9d): State: TIMED_WAITING Blocked count: 0 Waited count: 137 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:401) java.lang.Thread.run(Thread.java:748) Thread 50 (DecommissionMonitor-0): State: TIMED_WAITING Blocked count: 0 Waited count: 229 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 49 (org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@2e9478fa): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221) java.lang.Thread.run(Thread.java:748) Thread 45 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@6008e15b): State: TIMED_WAITING Blocked count: 0 Waited count: 1368 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 43 (IPC Server idle connection scanner for port 45471): State: TIMED_WAITING Blocked count: 1 Waited count: 70 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 42 (Socket Reader #1 for port 45471): State: RUNNABLE Blocked count: 2 Waited count: 3 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) 
org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 36 (Timer-0): State: TIMED_WAITING Blocked count: 0 Waited count: 23 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 35 (2067905146@qtp-1695695008-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54312): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 34 (1265201788@qtp-1695695008-0): State: TIMED_WAITING Blocked count: 0 Waited count: 12 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 33 (pool-2-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 24 (org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner): State: WAITING Blocked count: 1 Waited count: 2 Waiting on java.lang.ref.ReferenceQueue$Lock@fcd6d08 Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3060) java.lang.Thread.run(Thread.java:748) Thread 23 (Time-limited test): State: RUNNABLE Blocked count: 281 Waited count: 475 Stack: sun.management.ThreadImpl.getThreadInfo1(Native Method) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139) org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:168) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) org.apache.hadoop.hbase.util.Threads$PrintThreadInfoLazyHolder$1.printThreadInfo(Threads.java:294) org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:341) org.apache.hadoop.hbase.util.Threads.threadDumpingIsAlive(Threads.java:135) org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:400) org.apache.hadoop.hbase.MiniHBaseCluster.waitUntilShutDown(MiniHBaseCluster.java:861) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:1123) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1105) 
org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDownAfterClass(RestoreSnapshotFromClientTestBase.java:73) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) Thread 19 (surefire-forkedjvm-ping-30s): State: TIMED_WAITING Blocked count: 686 Waited count: 1354 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 18 (surefire-forkedjvm-command-thread): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:255) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readInt(DataInputStream.java:387) org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115) org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:391) java.lang.Thread.run(Thread.java:748) Thread 4 (Signal Dispatcher): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: Thread 3 (Finalizer): State: WAITING Blocked count: 19 Waited count: 10 
Waiting on java.lang.ref.ReferenceQueue$Lock@39047c6c Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216) Thread 2 (Reference Handler): State: WAITING Blocked count: 9 Waited count: 7 Waiting on java.lang.ref.Reference$Lock@5dc5e369 Stack: java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) java.lang.ref.Reference.tryHandlePending(Reference.java:191) java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) Thread 1 (main): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.FutureTask.awaitDone(FutureTask.java:426) java.util.concurrent.FutureTask.get(FutureTask.java:204) org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:141) org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:127) org.junit.rules.RunRules.evaluate(RunRules.java:20) org.junit.runners.ParentRunner.run(ParentRunner.java:363) org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 2018-12-04 21:00:18,915 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread 
PEWorker-1, 3mins, 2.421sec; sending interrupt 2018-12-04 21:00:20,916 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 4.423sec; sending interrupt 2018-12-04 21:00:22,917 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 6.424sec; sending interrupt 2018-12-04 21:00:24,919 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 8.425sec; sending interrupt 2018-12-04 21:00:26,920 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 10.427sec; sending interrupt 2018-12-04 21:00:28,921 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 12.428sec; sending interrupt 2018-12-04 21:00:30,923 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 14.429sec; sending interrupt 2018-12-04 21:00:32,924 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 16.431sec; sending interrupt 2018-12-04 21:00:34,927 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 18.434sec; sending interrupt 2018-12-04 21:00:36,929 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 20.436sec; sending interrupt 2018-12-04 21:00:38,934 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 22.441sec; sending interrupt 2018-12-04 21:00:40,938 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 24.445sec; sending interrupt 2018-12-04 21:00:42,942 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 26.449sec; sending interrupt 2018-12-04 21:00:44,943 WARN [M:0;asf910:53736] 
procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 28.45sec; sending interrupt 2018-12-04 21:00:46,944 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 30.451sec; sending interrupt 2018-12-04 21:00:48,946 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 32.453sec; sending interrupt 2018-12-04 21:00:50,947 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 34.454sec; sending interrupt 2018-12-04 21:00:52,948 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 36.455sec; sending interrupt 2018-12-04 21:00:54,949 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 38.456sec; sending interrupt 2018-12-04 21:00:56,950 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 40.457sec; sending interrupt 2018-12-04 21:00:58,952 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 42.458sec; sending interrupt 2018-12-04 21:01:00,953 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 44.46sec; sending interrupt 2018-12-04 21:01:02,954 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 46.461sec; sending interrupt 2018-12-04 21:01:04,955 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 48.462sec; sending interrupt 2018-12-04 21:01:06,957 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 50.464sec; sending interrupt 2018-12-04 21:01:08,958 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 52.465sec; sending 
interrupt 2018-12-04 21:01:10,959 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 54.466sec; sending interrupt 2018-12-04 21:01:12,960 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 56.467sec; sending interrupt 2018-12-04 21:01:14,961 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 3mins, 58.468sec; sending interrupt Process Thread Dump: Automatic Stack Trace every 60 seconds waiting on M:0;asf910:53736 239 active threads Thread 1508 (Timer for 'HBase' metrics system): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 1402 (process reaper): State: TIMED_WAITING Blocked count: 4 Waited count: 242 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 1353 (RS-EventLoopGroup-4-12): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1342 (RS-EventLoopGroup-4-11): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1320 (RS-EventLoopGroup-4-10): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) 2018-12-04 21:01:16,967 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 0.474sec; sending interrupt Thread 1308 (RS-EventLoopGroup-4-9): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1300 (IPC Parameter Sending Thread #3): State: TIMED_WAITING Blocked count: 0 Waited count: 836 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 1285 (RS-EventLoopGroup-4-8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1272 (RS-EventLoopGroup-4-7): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1120 (RS-EventLoopGroup-4-6): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 
1011 (RS-EventLoopGroup-3-4): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1010 (Default-IPC-NioEventLoopGroup-7-4): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 1009 (Default-IPC-NioEventLoopGroup-7-3): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 882 (RS-EventLoopGroup-4-5): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 745 (RS-EventLoopGroup-1-5): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 744 (Default-IPC-NioEventLoopGroup-7-2): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 743 (RS-EventLoopGroup-4-4): State: RUNNABLE Blocked count: 6 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 742 (Default-IPC-NioEventLoopGroup-7-1): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 737 (region-location-1): State: WAITING Blocked count: 3 Waited count: 7 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 736 (region-location-0): State: WAITING Blocked count: 1 Waited count: 3 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b9b0617 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 732 (RS-EventLoopGroup-3-3): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 731 (RS-EventLoopGroup-5-32): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 725 (RS-EventLoopGroup-3-2): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 724 (RS-EventLoopGroup-5-31): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 701 (RS-EventLoopGroup-4-3): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 700 (RS-EventLoopGroup-5-30): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 695 (RS-EventLoopGroup-5-28): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 694 (RS-EventLoopGroup-5-29): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 691 (RS-EventLoopGroup-5-27): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 
687 (RS-EventLoopGroup-5-26): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 686 (RS-EventLoopGroup-5-25): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 684 (RS-EventLoopGroup-5-24): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 682 (RS-EventLoopGroup-4-2): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 681 (RS-EventLoopGroup-5-23): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 675 (RS-EventLoopGroup-5-22): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 672 (RS-EventLoopGroup-5-16): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 666 (RS-EventLoopGroup-5-14): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 674 (RS-EventLoopGroup-5-15): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 673 (RS-EventLoopGroup-5-17): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 671 (RS-EventLoopGroup-5-18): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 670 (RS-EventLoopGroup-5-20): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 669 (RS-EventLoopGroup-5-21): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 
667 (RS-EventLoopGroup-5-19): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 650 (RS-EventLoopGroup-5-11): State: RUNNABLE Blocked count: 1 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 647 (RS-EventLoopGroup-5-13): State: RUNNABLE Blocked count: 9 Waited count: 2 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 646 (RS-EventLoopGroup-5-12):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 645 (RS-EventLoopGroup-5-10):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 644 (RS-EventLoopGroup-5-9):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 643 (RS-EventLoopGroup-5-8):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 642 (RS-EventLoopGroup-5-7):
  State: RUNNABLE
  Blocked count: 7
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 641 (RS-EventLoopGroup-5-5):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 640 (RS-EventLoopGroup-5-6):
  State: RUNNABLE
  Blocked count: 5
  Waited count: 2
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 626 (RS:1;asf910:51486-MemStoreChunkPool Statistics):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 3
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 624 (RS:2;asf910:36011-MemStoreChunkPool Statistics):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 3
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 622 (RS:1;asf910:51486-MemStoreChunkPool Statistics):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 3
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 621 (RS:2;asf910:36011-MemStoreChunkPool Statistics):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 3
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 602 (regionserver/asf910:0.procedureResultReporter):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7a4b6fa6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)
Thread 604 (regionserver/asf910:0.procedureResultReporter):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@35397d65
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)
Thread 603 (regionserver/asf910:0.procedureResultReporter):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@294176ac
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)
Thread 581 (RegionServerTracker-0):
  State: WAITING
  Blocked count: 7
  Waited count: 8
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@31bfbac5
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 580 (master/asf910:0:becomeActiveMaster-HFileCleaner.small.0-1543956541242):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38e47ecb
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550)
    org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250)
    org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:234)
Thread 579 (master/asf910:0:becomeActiveMaster-HFileCleaner.large.0-1543956541242):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@61dd31d9
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106)
    org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250)
    org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:219)
Thread 578 (snapshot-hfile-cleaner-cache-refresher):
  State: TIMED_WAITING
  Blocked count: 6
  Waited count: 16
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 576 (OldWALsCleaner-1):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181)
    org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159)
    org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)
Thread 575 (OldWALsCleaner-0):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@552c0666
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181)
    org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159)
    org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)
Thread 574 (master/asf910:0:becomeActiveMaster-EventThread):
  State: WAITING
  Blocked count: 0
  Waited count: 2
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5df7a6e3
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
Thread 573 (master/asf910:0:becomeActiveMaster-SendThread(localhost:64381)):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
Thread 527 (PEWorker-1):
  State: BLOCKED
  Blocked count: 10
  Waited count: 89
  Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5
  Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736)
  Stack:
    org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isTakingSnapshot(SnapshotManager.java:423)
    org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.prepareSplitRegion(SplitTableRegionProcedure.java:470)
    org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:244)
    org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:97)
    org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189)
    org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039)
Thread 572 (threadDeathWatcher-6-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 736
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 571 (RS-EventLoopGroup-1-4):
  State: RUNNABLE
  Blocked count: 47
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 570 (RS-EventLoopGroup-1-3):
  State: RUNNABLE
  Blocked count: 36
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 569 (RS-EventLoopGroup-1-2):
  State: RUNNABLE
  Blocked count: 29
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 523 (RpcClient-timer-pool1-t1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 73571
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560)
    org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459)
    java.lang.Thread.run(Thread.java:748)
Thread 568 (RS-EventLoopGroup-5-3):
  State: RUNNABLE
  Blocked count: 34
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 567 (RS-EventLoopGroup-5-4):
  State: RUNNABLE
  Blocked count: 33
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 566 (RS-EventLoopGroup-5-2):
  State: RUNNABLE
  Blocked count: 39
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 564 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE):
  State: RUNNABLE
  Blocked count: 83
  Waited count: 83
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    java.lang.Thread.run(Thread.java:748)
Thread 565 (ResponseProcessor for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847)
Thread 563 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE):
  State: RUNNABLE
  Blocked count: 33
  Waited count: 31
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    java.lang.Thread.run(Thread.java:748)
Thread 562 (PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[]):
  State: WAITING
  Blocked count: 175
  Waited count: 176
  Waiting on java.util.LinkedList@6be3c98f
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309)
    java.lang.Thread.run(Thread.java:748)
Thread 561 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 560 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 559 (DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]):
  State: RUNNABLE
  Blocked count: 5
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
Thread 544 (DataStreamer for file /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005):
  State: TIMED_WAITING
  Blocked count: 307
  Waited count: 332
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 525 (WALProcedureStoreSyncThread):
  State: TIMED_WAITING
  Blocked count: 307
  Waited count: 509
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:822)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:111)
    org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:313)
Thread 524 (Idle-Rpc-Conn-Sweeper-pool2-t1):
  State: WAITING
  Blocked count: 0
  Waited count: 46
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@57e62326
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 519 (Thread-186):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 737
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)
Thread 517 (master/asf910:0.splitLogManager..Chore.1):
  State: WAITING
  Blocked count: 0
  Waited count: 499
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4440678d
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 489 (org.apache.hadoop.hdfs.PeerCache@688f09e2):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 246
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:255)
    org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
    org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
    java.lang.Thread.run(Thread.java:748)
Thread 485 (Monitor thread for TaskMonitor):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 74
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    java.lang.Thread.run(Thread.java:748)
Thread 423 (M:0;asf910:53736):
  State: TIMED_WAITING
  Blocked count: 6
  Waited count: 5923
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1260)
    org.apache.hadoop.hbase.procedure2.StoppableThread.awaitTermination(StoppableThread.java:42)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor.join(ProcedureExecutor.java:697)
    org.apache.hadoop.hbase.master.HMaster.stopProcedureExecutor(HMaster.java:1470)
    org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1413)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1133)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:595)
    java.lang.Thread.run(Thread.java:748)
Thread 466 (RS-EventLoopGroup-5-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 447 (RS-EventLoopGroup-4-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 425 (RS-EventLoopGroup-3-1):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Thread 422 (RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@724e7839
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 421 (RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5ff22b89
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Thread 420 (RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=53736):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@69da2841
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 419 (RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736): State: WAITING Blocked count: 0 Waited count: 3 Waiting on java.util.concurrent.Semaphore$NonfairSync@427563ce Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 418 (RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=53736): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.Semaphore$NonfairSync@43fb5409 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 417 (RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=53736): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.Semaphore$NonfairSync@d9b919c Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 416 (RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=53736): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.Semaphore$NonfairSync@35bb384e Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 415 (RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=53736): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.Semaphore$NonfairSync@5c394624 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 414 (RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=53736): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.Semaphore$NonfairSync@45fb1ab6 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 413 (RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736): State: BLOCKED Blocked count: 70 
Waited count: 1713 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736): State: TIMED_WAITING Blocked count: 60 Waited count: 359 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.tryAcquire(LockManager.java:162) org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.acquire(LockManager.java:123) org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.prepare(TakeSnapshotHandler.java:141) org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:60) 
org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:46) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotTable(SnapshotManager.java:524) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:510) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:633) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 411 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736): State: BLOCKED Blocked count: 50 Waited count: 1102 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 410 (RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736): State: BLOCKED Blocked count: 40 Waited count: 2534 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 409 (RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736): State: BLOCKED Blocked count: 12 Waited count: 2935 Blocked on org.apache.hadoop.hbase.master.snapshot.SnapshotManager@51c5c8d5 Blocked by 412 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736) Stack: org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587) org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) 
org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Thread 408 (Time-limited test-EventThread): State: WAITING Blocked count: 15 Waited count: 29 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@225b2ff1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Thread 407 (Time-limited test-SendThread(localhost:64381)): State: RUNNABLE Blocked count: 9 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Thread 404 (RS-EventLoopGroup-1-1): State: RUNNABLE Blocked count: 3 Waited count: 0 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Thread 403 (HBase-Metrics2-1): State: TIMED_WAITING Blocked count: 0 Waited count: 428 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 394 (LeaseRenewer:jenkins@localhost:45471): State: TIMED_WAITING Blocked count: 23 Waited count: 788 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444) org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) java.lang.Thread.run(Thread.java:748) Thread 391 (ProcessThread(sid:0 cport:64381):): State: WAITING Blocked count: 0 Waited count: 2100 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4eede27 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122) Thread 390 (SyncThread:0): State: WAITING Blocked count: 3 Waited count: 2015 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7530c34 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127) Thread 389 (SessionTracker): State: TIMED_WAITING Blocked count: 0 Waited count: 371 Stack: java.lang.Object.wait(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146) Thread 388 (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:64381): State: RUNNABLE Blocked count: 29 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173) java.lang.Thread.run(Thread.java:748) Thread 387 (java.util.concurrent.ThreadPoolExecutor$Worker@ee3a971[State = -1, empty queue]): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 382 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 381 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 376 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@6f2e2192): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088) java.lang.Thread.run(Thread.java:748) Thread 375 
(VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6)): State: TIMED_WAITING Blocked count: 1 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 374 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5)): State: TIMED_WAITING Blocked count: 1 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) Thread 373 (java.util.concurrent.ThreadPoolExecutor$Worker@3f568709[State = -1, empty queue]): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 368 
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 367 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 366 (java.util.concurrent.ThreadPoolExecutor$Worker@507b9b35[State = -1, empty queue]): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 361 
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 360 (refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/current/BP-2082010496-67.195.81.154-1543956529943): State: TIMED_WAITING Blocked count: 2 Waited count: 4 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) java.lang.Thread.run(Thread.java:748) Thread 350 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@199cccae): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088) java.lang.Thread.run(Thread.java:748) Thread 349 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@28232ce3): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088) java.lang.Thread.run(Thread.java:748) Thread 348 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2)): 
State: TIMED_WAITING Blocked count: 17 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 347 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4)): State: TIMED_WAITING Blocked count: 16 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 346 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3)): State: TIMED_WAITING Blocked count: 21 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 345 (VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1)): State: TIMED_WAITING Blocked count: 21 Waited count: 2 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
Thread 335 (IPC Server handler 9 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 742 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 334 (IPC Server handler 8 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 749 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 333 (IPC Server handler 7 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 753 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 332 (IPC Server handler 6 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 756 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 331 (IPC Server handler 5 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 751 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 330 (IPC Server handler 4 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 750 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 329 (IPC Server handler 3 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 754 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 328 (IPC Server handler 2 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 745 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 327 (IPC Server handler 1 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 747 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 326 (IPC Server handler 0 on 33303): State: TIMED_WAITING Blocked count: 0 Waited count: 745 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 321 (IPC Server listener on 33303): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)
Thread 324 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965)
Thread 251 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6dda7de8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748)
Thread 325 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 358 Waited count: 881 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748)
Thread 323 (IPC Server idle connection scanner for port 33303): State: TIMED_WAITING Blocked count: 1 Waited count: 76 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Thread 322 (Socket Reader #1 for port 33303): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
Thread 320 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@717546f2): State: TIMED_WAITING Blocked count: 0 Waited count: 1484 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748)
Thread 256 (nioEventLoopGroup-6-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748)
Thread 255 (Timer-3): State: TIMED_WAITING Blocked count: 0 Waited count: 25 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Thread 254 (1921255371@qtp-2074985929-1): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
Thread 253 (1142869926@qtp-2074985929-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46872): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Thread 252 (pool-7-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Thread 246 (IPC Server handler 9 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 752 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 245 (IPC Server handler 8 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 748 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 244 (IPC Server handler 7 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 753 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 243 (IPC Server handler 6 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 750 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 242 (IPC Server handler 5 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 744 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 241 (IPC Server handler 4 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 743 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 240 (IPC Server handler 3 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 746 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 239 (IPC Server handler 2 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 750 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 238 (IPC Server handler 1 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 744 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 237 (IPC Server handler 0 on 59129): State: TIMED_WAITING Blocked count: 0 Waited count: 748 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 232 (IPC Server listener on 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)
Thread 235 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965)
Thread 160 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3d7bc726): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748)
Thread 236 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 384 Waited count: 875 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748)
Thread 234 (IPC Server idle connection scanner for port 59129): State: TIMED_WAITING Blocked count: 1 Waited count: 76 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Thread 233 (Socket Reader #1 for port 59129): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
Thread 231 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@70916a98): State: TIMED_WAITING Blocked count: 0 Waited count: 1486 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748)
Thread 167 (nioEventLoopGroup-4-1): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748)
Thread 166 (Timer-2): State: TIMED_WAITING Blocked count: 0 Waited count: 25 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Thread 164 (IPC Client (291152797) connection to localhost/127.0.0.1:45471 from jenkins): State: TIMED_WAITING Blocked count: 811 Waited count: 809 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934) org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)
Thread 163 (251275394@qtp-414562224-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:57500): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Thread 162 (212430440@qtp-414562224-0): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
Thread 161 (pool-6-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Thread 155 (IPC Server handler 9 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 750 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 154 (IPC Server handler 8 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 750 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 153 (IPC Server handler 7 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 760 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 152 (IPC Server handler 6 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 762 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 151 (IPC Server handler 5 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 755 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 150 (IPC Server handler 4 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 762 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 149 (IPC Server handler 3 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 761 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 148 (IPC Server handler 2 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 756 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 147 (IPC Server handler 1 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 765 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 146 (IPC Server handler 0 on 33361): State: TIMED_WAITING Blocked count: 0 Waited count: 755 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 141 (IPC Server listener on 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)
Thread 144 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965)
Thread 70 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2a3ed352): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) java.lang.Thread.run(Thread.java:748)
Thread 145 (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471): State: TIMED_WAITING Blocked count: 383 Waited count: 873 Stack: java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) java.lang.Thread.run(Thread.java:748)
Thread 143 (IPC Server idle connection scanner for port 33361): State: TIMED_WAITING Blocked count: 1 Waited count: 76 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Thread 142 (Socket Reader #1 for port 33361): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
Thread 140 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@32a2fdb6): State: TIMED_WAITING Blocked count: 0 Waited count: 1487 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748)
Thread 75 (nioEventLoopGroup-2-1): State: RUNNABLE Blocked count: 2 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) java.lang.Thread.run(Thread.java:748)
Thread 74 (Timer-1): State: TIMED_WAITING Blocked count: 0 Waited count: 25 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Thread 73 (1473617757@qtp-1074412217-1): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
Thread 72 (944216257@qtp-1074412217-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34827): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Thread 71 (pool-4-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Thread 65 (CacheReplicationMonitor(727192365)): State: TIMED_WAITING Blocked count: 0 Waited count: 26 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181)
Thread 64 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@306f96e5): State: TIMED_WAITING Blocked count: 1 Waited count: 4 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4739) java.lang.Thread.run(Thread.java:748)
Thread 63 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@13e751be): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4656) java.lang.Thread.run(Thread.java:748)
Thread 62 (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5b83ab23): State: TIMED_WAITING Blocked count: 0 Waited count: 149 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4612) java.lang.Thread.run(Thread.java:748)
Thread 61 (org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7e547db5): State: TIMED_WAITING Blocked count: 0 Waited count: 374 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:431) java.lang.Thread.run(Thread.java:748)
Thread 60 (IPC Server handler 9 on 45471): State: TIMED_WAITING Blocked count: 16 Waited count: 896 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 59 (IPC Server handler 8 on 45471): State: TIMED_WAITING Blocked count: 13 Waited count: 894 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 58 (IPC Server handler 7 on 45471): State: TIMED_WAITING Blocked count: 14 Waited count: 893 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)
Thread 57 (IPC Server handler 6 on 45471): State: TIMED_WAITING Blocked count: 14
Waited count: 903 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 56 (IPC Server handler 5 on 45471): State: TIMED_WAITING Blocked count: 12 Waited count: 907 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 55 (IPC Server handler 4 on 45471): State: TIMED_WAITING Blocked count: 8 Waited count: 894 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 54 (IPC Server handler 3 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 895 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 53 (IPC Server handler 2 on 45471): State: TIMED_WAITING Blocked count: 10 Waited count: 893 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 52 (IPC Server handler 1 on 45471): State: TIMED_WAITING Blocked count: 21 Waited count: 906 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 51 (IPC Server handler 0 on 45471): State: TIMED_WAITING Blocked count: 35 Waited count: 903 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) Thread 41 (IPC Server listener on 45471): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) Thread 44 (IPC Server Responder): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) Thread 38 (Block report processor): State: WAITING Blocked count: 6 Waited count: 105 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1c8f1d10 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:3860) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3849) Thread 37 (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652): State: TIMED_WAITING Blocked count: 1 Waited count: 249 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3635) java.lang.Thread.run(Thread.java:748) Thread 39 (org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6918b9d): State: TIMED_WAITING Blocked count: 0 Waited 
count: 149 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:401) java.lang.Thread.run(Thread.java:748) Thread 50 (DecommissionMonitor-0): State: TIMED_WAITING Blocked count: 0 Waited count: 249 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 49 (org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@2e9478fa): State: TIMED_WAITING Blocked count: 0 Waited count: 3 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221) java.lang.Thread.run(Thread.java:748) Thread 45 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@6008e15b): State: TIMED_WAITING Blocked count: 0 Waited count: 1489 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) java.lang.Thread.run(Thread.java:748) Thread 43 (IPC Server idle connection scanner for port 45471): State: TIMED_WAITING Blocked count: 1 Waited count: 76 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 42 (Socket Reader #1 for port 45471): State: 
RUNNABLE Blocked count: 2 Waited count: 3 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) Thread 36 (Timer-0): State: TIMED_WAITING Blocked count: 0 Waited count: 25 Stack: java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Thread 35 (2067905146@qtp-1695695008-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54312): State: RUNNABLE Blocked count: 1 Waited count: 1 Stack: sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Thread 34 (1265201788@qtp-1695695008-0): State: TIMED_WAITING Blocked count: 0 Waited count: 13 Stack: java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Thread 33 (pool-2-thread-1): State: TIMED_WAITING Blocked count: 0 Waited count: 1 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 24 (org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner): State: WAITING Blocked count: 1 Waited count: 2 Waiting on java.lang.ref.ReferenceQueue$Lock@fcd6d08 Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3060) java.lang.Thread.run(Thread.java:748) Thread 23 (Time-limited test): State: RUNNABLE Blocked count: 281 Waited count: 476 Stack: sun.management.ThreadImpl.getThreadInfo1(Native Method) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178) sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139) org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:168) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) org.apache.hadoop.hbase.util.Threads$PrintThreadInfoLazyHolder$1.printThreadInfo(Threads.java:294) org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:341) 
org.apache.hadoop.hbase.util.Threads.threadDumpingIsAlive(Threads.java:135) org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:400) org.apache.hadoop.hbase.MiniHBaseCluster.waitUntilShutDown(MiniHBaseCluster.java:861) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:1123) org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1105) org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDownAfterClass(RestoreSnapshotFromClientTestBase.java:73) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) Thread 19 (surefire-forkedjvm-ping-30s): State: TIMED_WAITING Blocked count: 743 Waited count: 1473 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 18 (surefire-forkedjvm-command-thread): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: java.io.FileInputStream.readBytes(Native Method) java.io.FileInputStream.read(FileInputStream.java:255) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readInt(DataInputStream.java:387) org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115) org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:391) java.lang.Thread.run(Thread.java:748) Thread 4 (Signal Dispatcher): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: Thread 3 (Finalizer): State: WAITING Blocked count: 19 Waited count: 10 Waiting on java.lang.ref.ReferenceQueue$Lock@39047c6c Stack: java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216) Thread 2 (Reference Handler): State: WAITING Blocked count: 9 Waited count: 7 Waiting on java.lang.ref.Reference$Lock@5dc5e369 Stack: java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) java.lang.ref.Reference.tryHandlePending(Reference.java:191) java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) Thread 1 (main): State: TIMED_WAITING Blocked count: 1 Waited count: 3 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.FutureTask.awaitDone(FutureTask.java:426) java.util.concurrent.FutureTask.get(FutureTask.java:204) org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:141) org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:127) org.junit.rules.RunRules.evaluate(RunRules.java:20) org.junit.runners.ParentRunner.run(ParentRunner.java:363) org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 2018-12-04 21:01:18,970 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 2.477sec; sending interrupt 2018-12-04 21:01:20,971 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 4.478sec; sending interrupt 2018-12-04 21:01:22,972 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 6.479sec; sending interrupt 2018-12-04 21:01:24,973 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 8.48sec; sending interrupt 2018-12-04 21:01:26,974 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 10.481sec; sending interrupt 2018-12-04 21:01:28,976 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 12.483sec; sending interrupt 2018-12-04 21:01:30,977 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 14.484sec; sending interrupt 2018-12-04 21:01:32,978 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 16.485sec; sending interrupt 2018-12-04 21:01:34,979 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 18.486sec; sending interrupt 2018-12-04 21:01:36,980 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 20.487sec; sending interrupt 2018-12-04 21:01:38,981 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): 
Waiting termination of thread PEWorker-1, 4mins, 22.488sec; sending interrupt 2018-12-04 21:01:40,983 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 24.489sec; sending interrupt 2018-12-04 21:01:42,984 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 26.491sec; sending interrupt 2018-12-04 21:01:44,985 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 28.492sec; sending interrupt 2018-12-04 21:01:46,986 WARN [M:0;asf910:53736] procedure2.StoppableThread(45): Waiting termination of thread PEWorker-1, 4mins, 30.493sec; sending interrupt 2018-12-04 21:01:48,334 DEBUG [Time-limited test] hbase.LocalHBaseCluster(402): Interrupted java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1260) at org.apache.hadoop.hbase.util.Threads.threadDumpingIsAlive(Threads.java:133) at org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:400) at org.apache.hadoop.hbase.MiniHBaseCluster.waitUntilShutDown(MiniHBaseCluster.java:861) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:1123) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1105) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDownAfterClass(RestoreSnapshotFromClientTestBase.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) 2018-12-04 21:01:48,335 WARN [Time-limited test] datanode.DirectoryScanner(529): DirectoryScanner: shutdown has been called 2018-12-04 21:01:48,345 WARN [ResponseProcessor for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005] hdfs.DFSOutputStream$DataStreamer$ResponseProcessor(942): DFSOutputStream ResponseProcessor exception for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005 java.io.EOFException: Premature EOF: no length prefix available at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2294) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847) 2018-12-04 21:01:48,346 WARN [DataStreamer for file /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005] hdfs.DFSOutputStream$DataStreamer(1234): Error Recovery for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005 in pipeline DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK], DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:60454,DS-5f235008-470b-44c0-8f58-8abc282f11fb,DISK]) is bad. ====> TEST TIMED OUT. 
PRINTING THREAD DUMP. <==== Timestamp: 2018-12-04 09:01:48,335 "NIOServerCxn.Factory:0.0.0.0/0.0.0.0:64381" daemon prio=5 tid=388 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 5 on 33361" daemon prio=5 tid=151 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=53736" daemon prio=5 tid=419 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=53736" daemon prio=5 tid=418 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "Socket Reader #1 for port 33303" daemon prio=5 tid=322 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) "RS-EventLoopGroup-4-5" daemon prio=10 tid=882 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at 
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 2 on 45471" daemon prio=5 tid=53 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-5-13" daemon prio=10 tid=647 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-21" daemon prio=10 tid=669 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 4 on 33361" daemon prio=5 tid=150 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"HBase-Metrics2-1" daemon prio=5 tid=403 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"Timer-3" daemon prio=5 tid=255 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"IPC Server handler 3 on 59129" daemon prio=5 tid=240 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"process reaper" daemon prio=10 tid=1402 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"M:0;asf910:53736" daemon prio=5 tid=423 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1260)
    at org.apache.hadoop.hbase.procedure2.StoppableThread.awaitTermination(StoppableThread.java:42)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.join(ProcedureExecutor.java:697)
    at org.apache.hadoop.hbase.master.HMaster.stopProcedureExecutor(HMaster.java:1470)
    at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1413)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1133)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:595)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@717546f2" daemon prio=5 tid=320 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182)
    at java.lang.Thread.run(Thread.java:748)

"IPC Client (291152797) connection to localhost/127.0.0.1:45471 from jenkins" daemon prio=5 tid=164 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:934)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)

"DecommissionMonitor-0" daemon prio=5 tid=50 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"master/asf910:0.splitLogManager..Chore.1" daemon prio=5 tid=517 in Object.wait()
   java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"Timer-0" daemon prio=5 tid=36 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"IPC Server handler 7 on 59129" daemon prio=5 tid=244 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-5-18" daemon prio=10 tid=671 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=53736" daemon prio=5 tid=409 blocked
   java.lang.Thread.State: BLOCKED
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    at org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/current/BP-2082010496-67.195.81.154-1543956529943" daemon prio=5 tid=382 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    at java.lang.Thread.run(Thread.java:748)

"Default-IPC-NioEventLoopGroup-7-2" daemon prio=10 tid=744 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Finalizer" daemon prio=8 tid=3 in Object.wait()
   java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
    at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216)

"IPC Server handler 5 on 33303" daemon prio=5 tid=331 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-5-29" daemon prio=10 tid=694 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-1-5" daemon prio=10 tid=745 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server idle connection scanner for port 45471" daemon prio=5 tid=43 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]" daemon prio=5 tid=561 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-17" daemon prio=10 tid=673 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"master/asf910:0:becomeActiveMaster-SendThread(localhost:64381)" daemon prio=5 tid=573 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"IPC Server handler 0 on 33361" daemon prio=5 tid=146 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7e547db5" daemon prio=5 tid=61 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:431)
    at java.lang.Thread.run(Thread.java:748)

"PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE" daemon prio=5 tid=563 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    at java.lang.Thread.run(Thread.java:748)

"Timer-2" daemon prio=5 tid=166 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"Default-IPC-NioEventLoopGroup-7-1" daemon prio=10 tid=742 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-16" daemon prio=10 tid=672 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Timer for 'HBase' metrics system" daemon prio=5 tid=1508 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"IPC Server Responder" daemon prio=5 tid=44 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982)
    at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965)

"IPC Server handler 1 on 45471" daemon prio=5 tid=52 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-1-3" daemon prio=10 tid=570 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-26" daemon prio=10 tid=687 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@70916a98" daemon prio=5 tid=231 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-3-2" daemon prio=10 tid=725 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"surefire-forkedjvm-ping-30s" daemon prio=5 tid=19 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-30" daemon prio=10 tid=700 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=53736" daemon prio=5 tid=420 in Object.wait()
   java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RS-EventLoopGroup-5-4" daemon prio=10 tid=567 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"CacheReplicationMonitor(727192365)" daemon prio=5 tid=65 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
    at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181)

"OldWALsCleaner-1" daemon prio=5 tid=576 in Object.wait()
   java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181)
    at org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159)
    at org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

"251275394@qtp-414562224-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:57500" daemon prio=5 tid=163 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

"nioEventLoopGroup-2-1" prio=10 tid=75 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-3-4" daemon prio=10 tid=1011 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 8 on 45471" daemon prio=5 tid=59 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server handler 8 on 33361" daemon prio=5 tid=154 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5b83ab23" daemon prio=5 tid=62 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4612)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 4 on 59129" daemon prio=5 tid=241 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"DataStreamer for file /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005" daemon prio=5 tid=544 runnable
   java.lang.Thread.State: RUNNABLE
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)

"IPC Server handler 3 on 45471" daemon prio=5 tid=54 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE" daemon prio=5 tid=564 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1291)
    at java.lang.Thread.run(Thread.java:748)

"944216257@qtp-1074412217-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34827" daemon prio=5 tid=72 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

"java.util.concurrent.ThreadPoolExecutor$Worker@507b9b35[State = -1, empty queue]" daemon prio=5 tid=366 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"1473617757@qtp-1074412217-1" daemon prio=5 tid=73 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

"Socket Reader #1 for port 45471" daemon prio=5 tid=42 runnable
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745)
    at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)

"RS-EventLoopGroup-5-24" daemon prio=10 tid=684 runnable
   java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471" daemon prio=5 tid=145 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 tid=144 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) "IPC Server handler 9 on 33361" daemon prio=5 tid=155 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "Time-limited test" daemon prio=5 tid=23 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Throwable.fillInStackTrace(Native Method) at java.lang.Throwable.fillInStackTrace(Throwable.java:783) at java.lang.Throwable.(Throwable.java:250) at org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253) at org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555) at org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528) at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65) at org.apache.log4j.PatternLayout.format(PatternLayout.java:506) at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310) at org.apache.log4j.WriterAppender.append(WriterAppender.java:162) at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders(Category.java:206) at org.apache.log4j.Category.forcedLog(Category.java:391) at org.apache.log4j.Category.log(Category.java:856) at org.apache.commons.logging.impl.Log4JLogger.warn(Log4JLogger.java:197) at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.shutdown(DirectoryScanner.java:529) at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownDirectoryScanner(DataNode.java:892) at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownPeriodicScanners(DataNode.java:863) at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1709) at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1754) at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1729) at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1713) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:765) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1106) at org.apache.hadoop.hbase.client.RestoreSnapshotFromClientTestBase.tearDownAfterClass(RestoreSnapshotFromClientTestBase.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) "regionserver/asf910:0.procedureResultReporter" daemon prio=5 tid=604 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) "RS-EventLoopGroup-5-23" daemon prio=10 tid=681 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Parameter Sending Thread #3" daemon prio=5 tid=1300 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-5-3" daemon prio=10 tid=568 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "main" prio=5 tid=1 runnable java.lang.Thread.State: RUNNABLE at java.lang.Thread.dumpThreads(Native Method) at java.lang.Thread.getAllStackTraces(Thread.java:1610) at org.apache.hadoop.hbase.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:88) at org.apache.hadoop.hbase.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:74) at org.apache.hadoop.hbase.TimedOutTestsListener.testFailure(TimedOutTestsListener.java:62) at org.junit.runner.notification.SynchronizedRunListener.testFailure(SynchronizedRunListener.java:63) at org.junit.runner.notification.RunNotifier$4.notifyListener(RunNotifier.java:142) at org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) at org.junit.runner.notification.RunNotifier.fireTestFailures(RunNotifier.java:138) at org.junit.runner.notification.RunNotifier.fireTestFailure(RunNotifier.java:132) at org.apache.maven.surefire.common.junit4.Notifier.fireTestFailure(Notifier.java:114) at org.junit.internal.runners.model.EachTestNotifier.addFailure(EachTestNotifier.java:23) at org.junit.internal.runners.model.EachTestNotifier.addMultipleFailureException(EachTestNotifier.java:29) at org.junit.internal.runners.model.EachTestNotifier.addFailure(EachTestNotifier.java:21) at org.junit.runners.ParentRunner.run(ParentRunner.java:369) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) "RS-EventLoopGroup-4-6" daemon prio=10 tid=1120 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "region-location-1" daemon prio=5 tid=737 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3)" daemon prio=5 tid=346 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6)" daemon prio=5 tid=375 runnable java.lang.Thread.State: RUNNABLE at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) "IPC Server handler 4 on 45471" daemon prio=5 tid=55 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "IPC Server handler 1 on 59129" daemon prio=5 tid=238 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "Default-IPC-NioEventLoopGroup-7-3" daemon prio=10 tid=1009 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "PacketResponder: BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[]" daemon prio=5 tid=562 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1238) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1309) at java.lang.Thread.run(Thread.java:748) 
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/current/BP-2082010496-67.195.81.154-1543956529943" daemon prio=5 tid=367 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471" daemon prio=5 tid=325 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 8 on 59129" daemon prio=5 tid=245 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "Socket Reader #1 for port 59129" daemon prio=5 tid=233 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) "org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@28232ce3" daemon prio=5 tid=349 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088) at java.lang.Thread.run(Thread.java:748) "refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/current/BP-2082010496-67.195.81.154-1543956529943" daemon prio=5 tid=381 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "Default-IPC-NioEventLoopGroup-7-4" daemon prio=10 tid=1010 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "212430440@qtp-414562224-0" daemon prio=5 tid=162 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) "IPC Server Responder" daemon prio=5 tid=324 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) "IPC Server listener on 33361" daemon prio=5 tid=141 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807) "WALProcedureStoreSyncThread" daemon prio=5 
tid=525 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:822) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:111) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:313) "IPC Server handler 9 on 33303" daemon prio=5 tid=335 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RS-EventLoopGroup-4-3" daemon prio=10 tid=701 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 0 on 45471" daemon prio=5 tid=51 timed_waiting 
java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=53736" daemon prio=5 tid=412 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.tryAcquire(LockManager.java:162) at org.apache.hadoop.hbase.master.locking.LockManager$MasterLock.acquire(LockManager.java:123) at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.prepare(TakeSnapshotHandler.java:141) at org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:60) at org.apache.hadoop.hbase.master.snapshot.EnabledTableSnapshotHandler.prepare(EnabledTableSnapshotHandler.java:46) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotTable(SnapshotManager.java:524) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:510) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:633) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) at 
org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"pool-7-thread-1" prio=5 tid=252 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-7" daemon prio=10 tid=642 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"pool-6-thread-1" prio=5 tid=161 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server listener on 45471" daemon prio=5 tid=41 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)

"Block report processor" daemon prio=5 tid=38 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:3860)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3849)

"ResponseProcessor for block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005" daemon prio=5 tid=565 terminated
java.lang.Thread.State: TERMINATED
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847)

"IPC Server handler 6 on 33361" daemon prio=5 tid=152 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=53736" daemon prio=5 tid=421 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@199cccae" daemon prio=5 tid=350 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-1-2" daemon prio=10 tid=569 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-20" daemon prio=10 tid=670 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=578 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=53736" daemon prio=5 tid=417 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RS-EventLoopGroup-1-1" daemon prio=10 tid=404 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=53736" daemon prio=5 tid=416 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/current/BP-2082010496-67.195.81.154-1543956529943" daemon prio=5 tid=368 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-4-12" daemon prio=10 tid=1353 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 0 on 59129" daemon prio=5 tid=237 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-5-10" daemon prio=10 tid=645 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Idle-Rpc-Conn-Sweeper-pool2-t1" daemon prio=5 tid=524 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 2 on 33303" daemon prio=5 tid=328 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@6008e15b" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-4-10" daemon prio=10 tid=1320 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-27" daemon prio=10 tid=691 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1)" daemon prio=5 tid=345 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)

"RS-EventLoopGroup-5-6" daemon prio=10 tid=640 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"master/asf910:0:becomeActiveMaster-EventThread" daemon prio=5 tid=574 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

"Monitor thread for TaskMonitor" daemon prio=5 tid=485 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    at java.lang.Thread.run(Thread.java:748)

"surefire-forkedjvm-command-thread" daemon prio=5 tid=18 runnable
java.lang.Thread.State: RUNNABLE
    at java.io.FileInputStream.readBytes(Native Method)
    at java.io.FileInputStream.read(FileInputStream.java:255)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115)
    at org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:391)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.hdfs.PeerCache@688f09e2" daemon prio=5 tid=489 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:255)
    at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
    at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
    at java.lang.Thread.run(Thread.java:748)

"RegionServerTracker-0" daemon prio=5 tid=581 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"LeaseRenewer:jenkins@localhost:45471" daemon prio=5 tid=394 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:444)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    at java.lang.Thread.run(Thread.java:748)

"nioEventLoopGroup-4-1" prio=10 tid=167 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-4-4" daemon prio=10 tid=743 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-8" daemon prio=10 tid=643 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server listener on 33303" daemon prio=5 tid=321 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)

"PEWorker-1" daemon prio=5 tid=527 blocked
java.lang.Thread.State: BLOCKED
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isTakingSnapshot(SnapshotManager.java:423)
    at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.prepareSplitRegion(SplitTableRegionProcedure.java:470)
    at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:244)
    at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:97)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039)

"1265201788@qtp-1695695008-0" daemon prio=5 tid=34 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

"master/asf910:0:becomeActiveMaster-HFileCleaner.small.0-1543956541242" daemon prio=5 tid=580 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:550)
    at org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250)
    at org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:234)

"IPC Server idle connection scanner for port 59129" daemon prio=5 tid=234 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"RS-EventLoopGroup-4-9" daemon prio=10 tid=1308 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"pool-4-thread-1" prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-1-4" daemon prio=10 tid=571 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 4 on 33303" daemon prio=5 tid=330 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server listener on 59129" daemon prio=5 tid=232 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    at org.apache.hadoop.ipc.Server$Listener.run(Server.java:807)

"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3d7bc726" daemon prio=5 tid=160 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
    at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    at java.lang.Thread.run(Thread.java:748)

"2067905146@qtp-1695695008-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54312" daemon prio=5 tid=35 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

"RS:1;asf910:51486-MemStoreChunkPool Statistics" daemon prio=5 tid=626 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"threadDeathWatcher-6-1" daemon prio=1 tid=572 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 5 on 45471" daemon prio=5 tid=56 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-4-7" daemon prio=10 tid=1272 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-3-3" daemon prio=10 tid=732 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"pool-2-thread-1" prio=5 tid=33 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 6 on 33303" daemon prio=5 tid=332 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server handler 0 on 33303" daemon prio=5 tid=326 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]" daemon prio=5 tid=559 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 3 on 33303" daemon prio=5 tid=329 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"1921255371@qtp-2074985929-1" daemon prio=5 tid=254 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

"RS-EventLoopGroup-5-14" daemon prio=10 tid=666 runnable
java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"java.util.concurrent.ThreadPoolExecutor$Worker@3f568709[State = -1, empty queue]" daemon prio=5 tid=373 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 5 on 59129" daemon prio=5 tid=242 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@306f96e5" daemon prio=5 tid=64 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4739) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-5-9" daemon prio=10 tid=644 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "OldWALsCleaner-0" daemon prio=5 tid=575 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) at org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) at org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$129/764299119.run(Unknown Source) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 7 on 33303" daemon prio=5 tid=333 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2)" daemon prio=5 tid=348 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628) 
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@2e9478fa" daemon prio=5 tid=49 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221) at java.lang.Thread.run(Thread.java:748) "Signal Dispatcher" daemon prio=9 tid=4 runnable java.lang.Thread.State: RUNNABLE "RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=53736" daemon prio=5 tid=415 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-5-19" daemon prio=10 tid=667 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "Timer-1" daemon prio=5 tid=74 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505) "org.apache.hadoop.util.JvmPauseMonitor$Monitor@32a2fdb6" daemon prio=5 tid=140 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:182) at java.lang.Thread.run(Thread.java:748) "regionserver/asf910:0.procedureResultReporter" daemon prio=5 tid=603 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) "IPC Server handler 6 on 59129" daemon prio=5 tid=243 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "Reference Handler" daemon prio=10 tid=2 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at 
java.lang.ref.Reference.tryHandlePending(Reference.java:191) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) "RS-EventLoopGroup-4-11" daemon prio=10 tid=1342 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS:2;asf910:36011-MemStoreChunkPool Statistics" daemon prio=5 tid=624 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-5-11" daemon prio=10 tid=650 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server idle connection scanner for port 33303" daemon prio=5 tid=323 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505) "IPC Server handler 9 on 59129" daemon prio=5 tid=246 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "Socket Reader #1 for port 33361" daemon prio=5 tid=142 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:745) at 
org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724) "org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@13e751be" daemon prio=5 tid=63 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4656) at java.lang.Thread.run(Thread.java:748) "refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/current/BP-2082010496-67.195.81.154-1543956529943" daemon prio=5 tid=360 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "RpcClient-timer-pool1-t1" daemon prio=5 tid=523 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 7 on 33361" daemon prio=5 tid=153 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RS-EventLoopGroup-5-15" daemon prio=10 tid=674 runnable java.lang.Thread.State: RUNNABLE at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 2 on 33361" daemon prio=5 tid=148 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "RS-EventLoopGroup-4-8" daemon prio=10 tid=1285 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at 
java.lang.Thread.run(Thread.java:748) "RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=53736" daemon prio=5 tid=413 blocked java.lang.Thread.State: BLOCKED at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) at org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-5-5" daemon prio=10 tid=641 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server handler 9 on 45471" daemon prio=5 tid=60 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185) "refreshUsed-/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/current/BP-2082010496-67.195.81.154-1543956529943" daemon prio=5 tid=361 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:132) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-4-1" daemon prio=10 tid=447 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS-EventLoopGroup-5-22" daemon prio=10 tid=675 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 tid=235 runnable java.lang.Thread.State: RUNNABLE at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:982) at org.apache.hadoop.ipc.Server$Responder.run(Server.java:965) "DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471" daemon prio=5 tid=236 timed_waiting java.lang.Thread.State: TIMED_WAITING at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:542) at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:659) at java.lang.Thread.run(Thread.java:748) "org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6dda7de8" daemon prio=5 tid=251 terminated java.lang.Thread.State: TERMINATED at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:205) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:257) at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146) at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135) at java.lang.Thread.run(Thread.java:748) "master/asf910:0:becomeActiveMaster-HFileCleaner.large.0-1543956541242" daemon prio=5 tid=579 in Object.wait() java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106) at org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) at org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:219) "RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=53736" daemon prio=5 tid=410 blocked java.lang.Thread.State: BLOCKED at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570) at org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) "RS-EventLoopGroup-5-1" daemon prio=10 tid=466 runnable java.lang.Thread.State: RUNNABLE at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) "RS:1;asf910:51486-MemStoreChunkPool Statistics" daemon prio=5 tid=622 timed_waiting java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-32" daemon prio=10 tid=731 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 3 on 33361" daemon prio=5 tid=149 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server handler 6 on 45471" daemon prio=5 tid=57 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=53736" daemon prio=5 tid=414 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"IPC Server handler 8 on 33303" daemon prio=5 tid=334 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"1142869926@qtp-2074985929-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46872" daemon prio=5 tid=253 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

"nioEventLoopGroup-6-1" prio=10 tid=256 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2a3ed352" daemon prio=5 tid=70 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
    at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:146)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 1 on 33361" daemon prio=5 tid=147 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-5-2" daemon prio=10 tid=566 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Time-limited test-SendThread(localhost:64381)" daemon prio=5 tid=407 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

"org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@53ff652" daemon prio=5 tid=37 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3635)
    at java.lang.Thread.run(Thread.java:748)

"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5)" daemon prio=5 tid=374 terminated
  java.lang.Thread.State: TERMINATED
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)

"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@6f2e2192" daemon prio=5 tid=376 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:3088)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server handler 1 on 33303" daemon prio=5 tid=327 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"IPC Server idle connection scanner for port 33361" daemon prio=5 tid=143 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at java.util.TimerThread.mainLoop(Timer.java:552)
    at java.util.TimerThread.run(Timer.java:505)

"SyncThread:0" daemon prio=5 tid=390 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)

"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6918b9d" daemon prio=5 tid=39 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:401)
    at java.lang.Thread.run(Thread.java:748)

"regionserver/asf910:0.procedureResultReporter" daemon prio=5 tid=602 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)

"RS-EventLoopGroup-5-12" daemon prio=10 tid=646 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"RS:2;asf910:36011-MemStoreChunkPool Statistics" daemon prio=5 tid=621 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"Time-limited test-EventThread" daemon prio=5 tid=408 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

"RS-EventLoopGroup-5-28" daemon prio=10 tid=695 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=24 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
    at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3060)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-5-25" daemon prio=10 tid=686 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"Thread-186" daemon prio=5 tid=519 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:523)

"SessionTracker" daemon prio=5 tid=389 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)

"IPC Server handler 2 on 59129" daemon prio=5 tid=239 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"ProcessThread(sid:0 cport:64381):" daemon prio=5 tid=391 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122)

"java.util.concurrent.ThreadPoolExecutor$Worker@ee3a971[State = -1, empty queue]" daemon prio=5 tid=387 terminated
  java.lang.Thread.State: TERMINATED
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-4-2" daemon prio=10 tid=682 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]" daemon prio=5 tid=560 runnable
  java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)

"RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=53736" daemon prio=5 tid=422 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    at org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=53736" daemon prio=5 tid=411 blocked
  java.lang.Thread.State: BLOCKED
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:986)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.cleanupSentinels(SnapshotManager.java:976)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshotInternal(SnapshotManager.java:587)
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:570)
    at org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1502)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

"IPC Server handler 7 on 45471" daemon prio=5 tid=58 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2185)

"RS-EventLoopGroup-5-31" daemon prio=10 tid=724 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"region-location-0" daemon prio=5 tid=736 in Object.wait()
  java.lang.Thread.State: WAITING (on object monitor)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"RS-EventLoopGroup-3-1" daemon prio=10 tid=425 runnable
  java.lang.Thread.State: RUNNABLE
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)

"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4)" daemon prio=5 tid=347 timed_waiting
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)

2018-12-04 21:01:48,382 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2018-12-04 21:01:48,472 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@3b25b806
2018-12-04 21:01:48,472 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-12-04 21:01:48,473 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@3b25b806
2018-12-04 21:01:48,473 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.
2018-12-04 21:01:48,473 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(112): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@3b25b806
2018-12-04 21:01:48,473 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(121): Starting fs shutdown hook thread.
2018-12-04 21:01:48,481 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:35696 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741874_1050]] datanode.DataXceiver(778): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/current, /home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/current]'}, localName='127.0.0.1:33680', datanodeUuid='b4f0e2e4-2a69-4998-9add-8ca52db3c08b', xmitsInProgress=0}:Exception transfering block BP-2082010496-67.195.81.154-1543956529943:blk_1073741874_1050 to mirror 127.0.0.1:60454: java.net.ConnectException: Connection refused
2018-12-04 21:01:48,483 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:48094 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741874_1050]] datanode.DataXceiver(280): 127.0.0.1:54375:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48094 dst: /127.0.0.1:54375
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
2018-12-04 21:01:48,486 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471] datanode.IncrementalBlockReportManager(132): IncrementalBlockReportManager interrupted
2018-12-04 21:01:48,486 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:45471] datanode.BPServiceActor(670): Ending block pool service for: Block pool BP-2082010496-67.195.81.154-1543956529943 (Datanode Uuid 0555b898-ca9b-47ca-bb8b-6eb6c6427ac8) service to localhost/127.0.0.1:45471
2018-12-04 21:01:48,485 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:42895 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.DataXceiver(280): 127.0.0.1:60454:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42895 dst: /127.0.0.1:60454
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:200)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
2018-12-04 21:01:48,483 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:35696 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741874_1050]] datanode.DataXceiver(280): 127.0.0.1:33680:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35696 dst: /127.0.0.1:33680
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:708)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
2018-12-04 21:01:48,493 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:46192 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.DataXceiver(280): 127.0.0.1:54375:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46192 dst: /127.0.0.1:54375
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
2018-12-04 21:01:48,495 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_595549873_23 at /127.0.0.1:33795 [Receiving block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005]] datanode.DataXceiver(280): 127.0.0.1:33680:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33795 dst: /127.0.0.1:33680
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:503)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:903)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:748)
2018-12-04 21:01:48,496 WARN [Time-limited test] datanode.DirectoryScanner(529): DirectoryScanner: shutdown has been called
2018-12-04 21:01:48,500 WARN [IPC Server handler 4 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2018-12-04 21:01:48,501 WARN [IPC Server handler 4 on 45471] protocol.BlockStoragePolicy(160): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2018-12-04 21:01:48,501 WARN [IPC Server handler 4 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2018-12-04 21:01:48,508 WARN [IPC Server handler 0 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2018-12-04 21:01:48,509 WARN [IPC Server handler 0 on 45471] protocol.BlockStoragePolicy(160): Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2018-12-04 21:01:48,509 WARN [IPC Server handler 0 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2018-12-04 21:01:48,509 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2018-12-04 21:01:48,516 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:33680 is added to blk_1073741876_1052{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5:NORMAL:127.0.0.1:33680|RBW]]} size 211
2018-12-04 21:01:48,516 INFO [Block report processor] blockmanagement.BlockManager(3145): BLOCK* addBlock: block blk_1073741874_1050 on node 127.0.0.1:33680 size 134217728 does not belong to any file
2018-12-04 21:01:48,517 INFO [Block report processor] blockmanagement.InvalidateBlocks(116): BLOCK* InvalidateBlocks: add blk_1073741874_1050 to 127.0.0.1:33680
2018-12-04 21:01:48,523 WARN [IPC Server handler 8 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2018-12-04 21:01:48,523 WARN [IPC Server handler 8 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy 2018-12-04 21:01:48,524 WARN [IPC Server handler 8 on 45471] protocol.BlockStoragePolicy(160): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2018-12-04 21:01:48,524 WARN [IPC Server handler 8 on 45471] blockmanagement.BlockPlacementPolicyDefault(385): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2018-12-04 21:01:48,528 WARN [DataStreamer for file /user/jenkins/test-data/851cf155-ca4b-8e04-5a3c-496add4cc960/MasterProcWALs/pv2-00000000000000000001.log block BP-2082010496-67.195.81.154-1543956529943:blk_1073741829_1005] hdfs.DFSOutputStream$DataStreamer(663): DataStreamer Exception java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK], DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]]). 
The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1033) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1107) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1276) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:990) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:507) 2018-12-04 21:01:48,611 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471] datanode.IncrementalBlockReportManager(132): IncrementalBlockReportManager interrupted 2018-12-04 21:01:48,611 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:45471] 
datanode.BPServiceActor(670): Ending block pool service for: Block pool BP-2082010496-67.195.81.154-1543956529943 (Datanode Uuid 9f9ff2c2-85c2-40ae-982a-9bba1c8f4d95) service to localhost/127.0.0.1:45471 2018-12-04 21:01:48,623 WARN [Time-limited test] datanode.DirectoryScanner(529): DirectoryScanner: shutdown has been called 2018-12-04 21:01:48,634 INFO [Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2018-12-04 21:01:48,735 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471] datanode.IncrementalBlockReportManager(132): IncrementalBlockReportManager interrupted 2018-12-04 21:01:48,736 WARN [DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Base-Flaky-Tests_branch-2.1-HDWEX3IYBHYNPLHYJ5SL2CJ5OVNB3YWMQ73HUX53YLI5I4AERGMA/hbase-server/target/test-data/da2d434a-80b9-3e9b-052b-5b78fd8259dc/cluster_cf4c8c73-74e6-60d0-3080-f3d2fca76131/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:45471] datanode.BPServiceActor(670): Ending block pool service for: Block pool BP-2082010496-67.195.81.154-1543956529943 (Datanode Uuid b4f0e2e4-2a69-4998-9add-8ca52db3c08b) service to localhost/127.0.0.1:45471 2018-12-04 21:01:48,771 INFO 
[Time-limited test] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2018-12-04 21:01:48,876 ERROR [Time-limited test] server.ZooKeeperServer(472): ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 2018-12-04 21:01:48,878 INFO [Time-limited test] zookeeper.MiniZooKeeperCluster(326): Shutdown MiniZK cluster with all ZK servers 2018-12-04 21:01:48,923 ERROR [ClientFinalizer-shutdown-hook] hdfs.DFSClient(949): Failed to close inode 16418 java.io.EOFException: End of File Exception between local host is: "asf910.gq1.ygridcore.net/67.195.81.154"; destination host is: "localhost":45471; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy33.complete(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:462) at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy34.complete(Unknown Source) at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) at com.sun.proxy.$Proxy37.complete(Unknown Source) at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) at com.sun.proxy.$Proxy37.complete(Unknown Source) at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2520) at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2497) at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2462) at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:946) at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:978) at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1076) at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2758) at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2775) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1085) at org.apache.hadoop.ipc.Client$Connection.run(Client.java:980) 2018-12-04 21:01:48,925 ERROR [ClientFinalizer-shutdown-hook] hdfs.DFSClient(949): Failed to close inode 16419 java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. 
(Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK], DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54375,DS-e5e4b851-a625-4939-b76b-08e33db5384e,DISK], DatanodeInfoWithStorage[127.0.0.1:33680,DS-c70cf766-0f83-4b66-ae0c-573d6f929ed5,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1033) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1107) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1276) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:990) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:507) 2018-12-04 21:01:48,925 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished.