2018-10-08 18:10:30,622 INFO [Time-limited test] hbase.ResourceChecker(148): before: backup.TestIncrementalBackupWithBulkLoad#TestIncBackupDeleteTable Thread=8, OpenFileDescriptor=205, MaxFileDescriptor=32000, SystemLoadAverage=192, ProcessCount=366, AvailableMemoryMB=43235
2018-10-08 18:10:30,629 DEBUG [Time-limited test] impl.BackupManager(129): Added log cleaner: org.apache.hadoop.hbase.backup.master.BackupLogCleaner. Added master procedure manager: org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager. Added master procedure manager: org.apache.hadoop.hbase.backup.BackupHFileCleaner
2018-10-08 18:10:30,633 DEBUG [Time-limited test] impl.BackupManager(159): Added region procedure manager: org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager. Added region observer: org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:10:30,640 INFO [Time-limited test] hbase.HBaseTestingUtility(1030): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=1, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2018-10-08 18:10:30,641 INFO [Time-limited test] hbase.HBaseZKTestingUtility(85): Created new mini-cluster data directory: /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f, deleteOnExit=true
2018-10-08 18:10:30,642 INFO [Time-limited test] hbase.HBaseTestingUtility(1044): STARTING DFS
2018-10-08 18:10:30,643 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting test.cache.data to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cache_data in system properties and HBase conf
2018-10-08 18:10:30,643 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting hadoop.tmp.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_tmp in system properties and HBase conf
2018-10-08 18:10:30,644 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting hadoop.log.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs in system properties and HBase conf
2018-10-08 18:10:30,644 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting mapreduce.cluster.local.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_local in system properties and HBase conf
2018-10-08 18:10:30,645 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting mapreduce.cluster.temp.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_temp in system properties and HBase conf
2018-10-08 18:10:30,645 INFO [Time-limited test] hbase.HBaseTestingUtility(743): read short circuit is OFF
2018-10-08 18:10:30,784 WARN [Time-limited test] util.NativeCodeLoader(60): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-10-08 18:10:31,206 DEBUG [Time-limited test] fs.HFileSystem(317): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2018-10-08 18:10:32,818 INFO [Time-limited test] beanutils.FluentPropertyBeanIntrospector(147): Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
2018-10-08 18:10:32,837 WARN [Time-limited test] impl.MetricsConfig(134): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2018-10-08 18:10:32,972 INFO [Time-limited test] log.Log(192): Logging initialized @3366ms
2018-10-08 18:10:33,114 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:10:33,152 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@61063b88{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:10:33,153 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@6c904a55{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-hdfs/3.1.1/hadoop-hdfs-3.1.1-tests.jar!/webapps/static,AVAILABLE}
2018-10-08 18:10:33,403 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@6a118103{/,file:///tmp/jetty-localhost-40592-hdfs-_-any-8685586505242990025.dir/webapp/,AVAILABLE}{/hdfs}
2018-10-08 18:10:33,411 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@b02cad7{HTTP/1.1,[http/1.1]}{localhost:40592}
2018-10-08 18:10:33,411 INFO [Time-limited test] server.Server(414): Started @3805ms
2018-10-08 18:10:34,881 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:10:34,883 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@5a1c7b3e{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:10:34,884 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@525aa793{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-hdfs/3.1.1/hadoop-hdfs-3.1.1-tests.jar!/webapps/static,AVAILABLE}
2018-10-08 18:10:35,077 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@38b73a58{/,file:///tmp/jetty-localhost-40342-datanode-_-any-6632440785946480130.dir/webapp/,AVAILABLE}{/datanode}
2018-10-08 18:10:35,078 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@6cbd884e{HTTP/1.1,[http/1.1]}{localhost:40342}
2018-10-08 18:10:35,078 INFO [Time-limited test] server.Server(414): Started @5473ms
2018-10-08 18:10:36,778 INFO [Block report processor] blockmanagement.BlockManager(2526): BLOCK* processReport 0xcca133d84d015075: Processing first storage report for DS-0430b48e-0911-4297-8877-48cfe5842d70 from datanode debf5d4d-aa2b-4e98-a5e9-4756ba54407e
2018-10-08 18:10:36,780 INFO [Block report processor] blockmanagement.BlockManager(2555): BLOCK* processReport 0xcca133d84d015075: from storage DS-0430b48e-0911-4297-8877-48cfe5842d70 node DatanodeRegistration(127.0.0.1:32877, datanodeUuid=debf5d4d-aa2b-4e98-a5e9-4756ba54407e, infoPort=38332, infoSecurePort=0, ipcPort=39596, storageInfo=lv=-57;cid=testClusterID;nsid=1334478237;c=1539022232083), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2018-10-08 18:10:36,780 INFO [Block report processor] blockmanagement.BlockManager(2526): BLOCK* processReport 0xcca133d84d015075: Processing first storage report for DS-0bf7e020-2229-4287-9571-73d92bf4cdb1 from datanode debf5d4d-aa2b-4e98-a5e9-4756ba54407e
2018-10-08 18:10:36,781 INFO [Block report processor] blockmanagement.BlockManager(2555): BLOCK* processReport 0xcca133d84d015075: from storage DS-0bf7e020-2229-4287-9571-73d92bf4cdb1 node DatanodeRegistration(127.0.0.1:32877, datanodeUuid=debf5d4d-aa2b-4e98-a5e9-4756ba54407e, infoPort=38332, infoSecurePort=0, ipcPort=39596, storageInfo=lv=-57;cid=testClusterID;nsid=1334478237;c=1539022232083), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2018-10-08 18:10:36,828 DEBUG [Time-limited test] hbase.HBaseTestingUtility(678): Setting hbase.rootdir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db
2018-10-08 18:10:36,883 ERROR [Time-limited test] server.ZooKeeperServer(472): ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2018-10-08 18:10:36,896 INFO [Time-limited test] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran successful 'stat' on client port=54078
2018-10-08 18:10:36,908 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:36,911 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:37,629 INFO [Time-limited test] util.FSUtils(515): Created version file at hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9 with version=8
2018-10-08 18:10:37,629 INFO [Time-limited test] hbase.HBaseTestingUtility(1344): Setting hbase.fs.tmp.dir to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase-staging
2018-10-08 18:10:37,658 DEBUG [Time-limited test] hbase.LocalHBaseCluster(146): Setting Master Port to random.
2018-10-08 18:10:37,658 DEBUG [Time-limited test] hbase.LocalHBaseCluster(151): Setting RegionServer Port to random.
2018-10-08 18:10:37,659 DEBUG [Time-limited test] hbase.LocalHBaseCluster(159): Setting RS InfoServer Port to random.
2018-10-08 18:10:37,659 DEBUG [Time-limited test] hbase.LocalHBaseCluster(165): Setting Master InfoServer Port to random.
2018-10-08 18:10:37,928 INFO [Time-limited test] metrics.MetricRegistriesLoader(66): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2018-10-08 18:10:38,599 INFO [Time-limited test] client.ConnectionUtils(122): master/cn012:0 server-side Connection retries=45
2018-10-08 18:10:38,643 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2018-10-08 18:10:38,646 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=200, handlerCount=20
2018-10-08 18:10:38,646 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-10-08 18:10:38,806 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientService, hbase.pb.AdminService
2018-10-08 18:10:38,823 DEBUG [Time-limited test] util.ClassSize(229): Using Unsafe to estimate memory layout
2018-10-08 18:10:38,902 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /172.18.128.12:42545
2018-10-08 18:10:38,915 INFO [Time-limited test] hfile.CacheConfig(553): Allocating onheap LruBlockCache size=995.60 MB, blockSize=64 KB
2018-10-08 18:10:38,921 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:38,922 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:38,925 DEBUG [Time-limited test] mob.MobFileCache(123): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2018-10-08 18:10:38,927 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:38,930 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:38,979 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=master:42545 connecting to ZooKeeper ensemble=localhost:54078
2018-10-08 18:10:39,102 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:425450x0, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-10-08 18:10:39,104 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): master:42545-0x16654dfacc40000 connected
2018-10-08 18:10:39,270 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2018-10-08 18:10:39,271 DEBUG [Time-limited test] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2018-10-08 18:10:39,286 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=30 with threadPrefix=default.FPBQ.Fifo, numCallQueues=3, port=42545
2018-10-08 18:10:39,288 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=20 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=2, port=42545
2018-10-08 18:10:39,289 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42545
2018-10-08 18:10:39,303 INFO [Time-limited test] http.HttpRequestLog(87): Http request log for http.requests.master is not defined
2018-10-08 18:10:39,305 INFO [Time-limited test] http.HttpServer(802): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2018-10-08 18:10:39,305 INFO [Time-limited test] http.HttpServer(802): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2018-10-08 18:10:39,307 INFO [Time-limited test] http.HttpServer(780): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2018-10-08 18:10:39,307 INFO [Time-limited test] http.HttpServer(787): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-10-08 18:10:39,307 INFO [Time-limited test] http.HttpServer(787): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-10-08 18:10:39,312 INFO [Time-limited test] http.HttpServer(1035): Jetty bound to port 36122
2018-10-08 18:10:39,312 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:10:39,317 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@4a1802a8{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:10:39,317 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@5290e690{/static,jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2018-10-08 18:10:39,504 INFO [Time-limited test] webapp.StandardDescriptorProcessor(280): NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2018-10-08 18:10:39,523 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@16f5fbb7{/,file:///tmp/jetty-0.0.0.0-36122-master-_-any-1038849327072065670.dir/webapp/,AVAILABLE}{jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/master}
2018-10-08 18:10:39,524 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@abac572{HTTP/1.1,[http/1.1]}{0.0.0.0:36122}
2018-10-08 18:10:39,524 INFO [Time-limited test] server.Server(414): Started @9919ms
2018-10-08 18:10:39,530 INFO [Time-limited test] master.HMaster(504): hbase.rootdir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9, hbase.cluster.distributed=false
2018-10-08 18:10:39,617 INFO [Time-limited test] client.ConnectionUtils(122): regionserver/cn012:0 server-side Connection retries=45
2018-10-08 18:10:39,618 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2018-10-08 18:10:39,618 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=200, handlerCount=20
2018-10-08 18:10:39,618 INFO [Time-limited test] ipc.RpcExecutor(148): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2018-10-08 18:10:39,623 INFO [Time-limited test] ipc.RpcServerFactory(65): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-10-08 18:10:39,625 INFO [Time-limited test] io.ByteBufferPool(83): Created with bufferSize=64 KB and maxPoolSize=1.88 KB
2018-10-08 18:10:39,628 INFO [Time-limited test] ipc.NettyRpcServer(110): Bind to /172.18.128.12:37486
2018-10-08 18:10:39,630 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:39,630 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:39,632 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:39,635 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:39,638 INFO [Time-limited test] zookeeper.RecoverableZooKeeper(106): Process identifier=regionserver:37486 connecting to ZooKeeper ensemble=localhost:54078
2018-10-08 18:10:39,651 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:374860x0, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-10-08 18:10:39,652 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(543): regionserver:37486-0x16654dfacc40001 connected
2018-10-08 18:10:39,652 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2018-10-08 18:10:39,653 DEBUG [Time-limited test] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2018-10-08 18:10:39,657 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=30 with threadPrefix=default.FPBQ.Fifo, numCallQueues=3, port=37486
2018-10-08 18:10:39,659 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=20 with threadPrefix=priority.FPBQ.Fifo, numCallQueues=2, port=37486
2018-10-08 18:10:39,660 DEBUG [Time-limited test] ipc.RpcExecutor(263): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37486
2018-10-08 18:10:39,660 INFO [Time-limited test] http.HttpRequestLog(87): Http request log for http.requests.regionserver is not defined
2018-10-08 18:10:39,661 INFO [Time-limited test] http.HttpServer(802): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2018-10-08 18:10:39,661 INFO [Time-limited test] http.HttpServer(802): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2018-10-08 18:10:39,662 INFO [Time-limited test] http.HttpServer(780): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2018-10-08 18:10:39,662 INFO [Time-limited test] http.HttpServer(787): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-10-08 18:10:39,663 INFO [Time-limited test] http.HttpServer(787): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-10-08 18:10:39,665 INFO [Time-limited test] http.HttpServer(1035): Jetty bound to port 43555
2018-10-08 18:10:39,665 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:10:39,668 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@167c9561{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:10:39,668 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@5b4194e7{/static,jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2018-10-08 18:10:39,818 INFO [Time-limited test] webapp.StandardDescriptorProcessor(280): NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2018-10-08 18:10:39,823 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@29e0936b{/,file:///tmp/jetty-0.0.0.0-43555-regionserver-_-any-8826539078594343473.dir/webapp/,AVAILABLE}{jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/regionserver}
2018-10-08 18:10:39,825 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@12fde802{HTTP/1.1,[http/1.1]}{0.0.0.0:43555}
2018-10-08 18:10:39,825 INFO [Time-limited test] server.Server(414): Started @10220ms
2018-10-08 18:10:39,832 INFO [master/cn012:0:becomeActiveMaster] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:10:39,834 INFO [master/cn012:0:becomeActiveMaster] server.AbstractConnector(278): Started ServerConnector@4143618a{HTTP/1.1,[http/1.1]}{0.0.0.0:43964}
2018-10-08 18:10:39,834 INFO [master/cn012:0:becomeActiveMaster] server.Server(414): Started @10228ms
2018-10-08 18:10:39,834 INFO [master/cn012:0:becomeActiveMaster] master.HMaster(2274): Adding backup master ZNode /1/backup-masters/cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:39,868 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/backup-masters/cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:39,909 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/master
2018-10-08 18:10:39,909 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/master
2018-10-08 18:10:39,910 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/master
2018-10-08 18:10:39,911 INFO [master/cn012:0:becomeActiveMaster] master.ActiveMasterManager(172): Deleting ZNode for /1/backup-masters/cn012.l42scl.hortonworks.com,42545,1539022237747 from backup master directory
2018-10-08 18:10:39,913 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/master
2018-10-08 18:10:39,926 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/backup-masters/cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:39,927 WARN [master/cn012:0:becomeActiveMaster] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2018-10-08 18:10:39,927 INFO [master/cn012:0:becomeActiveMaster] master.ActiveMasterManager(181): Registered as active master=cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:39,932 INFO [master/cn012:0:becomeActiveMaster] regionserver.ChunkCreator(498): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 448, initial count 0
2018-10-08 18:10:39,933 INFO [master/cn012:0:becomeActiveMaster] regionserver.ChunkCreator(498): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 497, initial count 0
2018-10-08 18:10:40,449 DEBUG [master/cn012:0:becomeActiveMaster] util.FSUtils(667): Created cluster ID file at hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase.id with ID: a09eba57-8547-488b-bfcc-4ab1ccd8474f
2018-10-08 18:10:40,498 INFO [master/cn012:0:becomeActiveMaster] master.MasterFileSystem(393): BOOTSTRAP: creating hbase:meta region
2018-10-08 18:10:40,504 INFO [master/cn012:0:becomeActiveMaster] regionserver.HRegion(7043): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'false', BLOCKSIZE => '8192'}, {NAME => 'rep_barrier', VERSIONS => '2147483647', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'table', VERSIONS => '3', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9 Table name == hbase:meta
2018-10-08 18:10:40,542 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(836): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:10:40,580 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/info
2018-10-08 18:10:40,600 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:40,613 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:40,631 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:40,635 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/rep_barrier
2018-10-08 18:10:40,636 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:40,636 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:40,637 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:40,641 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/table
2018-10-08 18:10:40,642 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:40,642 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:40,644 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:40,645 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(949): replaying wal for 1588230740
2018-10-08 18:10:40,655 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740
2018-10-08 18:10:40,656 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/meta/1588230740
2018-10-08 18:10:40,656 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(957): stopping wal replay for 1588230740
2018-10-08 18:10:40,657 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(969): Cleaning up temporary data for 1588230740
2018-10-08 18:10:40,663 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(980): Cleaning up detritus for 1588230740
2018-10-08 18:10:40,669 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor; using region.getMemStoreFlushHeapSize/# of families (42.7M) instead.
2018-10-08 18:10:40,671 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(1005): writing seq id for 1588230740
2018-10-08 18:10:40,677 DEBUG [master/cn012:0:becomeActiveMaster] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:10:40,677 INFO [master/cn012:0:becomeActiveMaster] regionserver.HRegion(1009): Opened 1588230740; next sequenceid=2
2018-10-08 18:10:40,677 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(1554): Closing 1588230740, disabling compactions & flushes
2018-10-08 18:10:40,677 DEBUG [master/cn012:0:becomeActiveMaster] regionserver.HRegion(1594): Updates disabled for region hbase:meta,,1.1588230740
2018-10-08 18:10:40,678 INFO [master/cn012:0:becomeActiveMaster] regionserver.HRegion(1711): Closed hbase:meta,,1.1588230740
2018-10-08 18:10:41,112 DEBUG [master/cn012:0:becomeActiveMaster] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2018-10-08 18:10:41,173 INFO [master/cn012:0:becomeActiveMaster] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:10:41,190 INFO [master/cn012:0:becomeActiveMaster] coordination.ZKSplitLogManagerCoordination(494): Found 0 orphan tasks and 0 rescan nodes
2018-10-08 18:10:41,253 INFO [master/cn012:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x42b80f47 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:10:41,299 DEBUG [master/cn012:0:becomeActiveMaster] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14043551, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:10:41,338 INFO [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(641): Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker count=160
2018-10-08 18:10:41,346 DEBUG [master/cn012:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(892): Using builder API via reflection for DFS file creation.
2018-10-08 18:10:41,355 INFO [master/cn012:0:becomeActiveMaster] wal.WALProcedureStore(1107): Rolled new Procedure Store WAL, id=1
2018-10-08 18:10:41,356 INFO [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(660): Recovered WALProcedureStore lease in 16msec
2018-10-08 18:10:41,358 INFO [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(674): Loaded WALProcedureStore in 1msec
2018-10-08 18:10:41,358 INFO [master/cn012:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(97): Instantiated, coreThreads=128 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2018-10-08 18:10:41,397 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(614): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2018-10-08 18:10:41,409 INFO [master/cn012:0:becomeActiveMaster] master.RegionServerTracker(123): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2018-10-08 18:10:41,445 INFO [master/cn012:0:becomeActiveMaster] balancer.BaseLoadBalancer(1039): slop=0.001, tablesOnMaster=false, systemTablesOnMaster=false
2018-10-08 18:10:41,454 INFO [master/cn012:0:becomeActiveMaster] balancer.StochasticLoadBalancer(216): Loaded config; maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, etc.
2018-10-08 18:10:41,472 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/balancer
2018-10-08 18:10:41,473 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/normalizer
2018-10-08 18:10:41,485 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/split
2018-10-08 18:10:41,488 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/merge
2018-10-08 18:10:41,526 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running
2018-10-08 18:10:41,526 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running
2018-10-08 18:10:41,527 INFO [master/cn012:0:becomeActiveMaster] master.HMaster(796): Active/primary master=cn012.l42scl.hortonworks.com,42545,1539022237747, sessionid=0x16654dfacc40000, setting cluster-up flag (Was=false)
2018-10-08 18:10:41,537 INFO [master/cn012:0:becomeActiveMaster] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager was loaded successfully.
2018-10-08 18:10:41,779 DEBUG [master/cn012:0:becomeActiveMaster] procedure.ZKProcedureUtil(272): Clearing all znodes /1/flush-table-proc/acquired, /1/flush-table-proc/reached, /1/flush-table-proc/abort
2018-10-08 18:10:41,782 DEBUG [master/cn012:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:42,080 DEBUG [master/cn012:0:becomeActiveMaster] procedure.ZKProcedureUtil(272): Clearing all znodes /1/rolllog-proc/acquired, /1/rolllog-proc/reached, /1/rolllog-proc/abort
2018-10-08 18:10:42,084 DEBUG [master/cn012:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:42,328 DEBUG [master/cn012:0:becomeActiveMaster] procedure.ZKProcedureUtil(272): Clearing all znodes /1/online-snapshot/acquired, /1/online-snapshot/reached, /1/online-snapshot/abort
2018-10-08 18:10:42,331 DEBUG [master/cn012:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(250): Starting controller for procedure member=cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:42,335 WARN [master/cn012:0:becomeActiveMaster] snapshot.SnapshotManager(283): Couldn't delete working snapshot directory: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp
2018-10-08 18:10:42,337 INFO [master/cn012:0:becomeActiveMaster] master.ServerManager(1112): No .lastflushedseqids found at hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.lastflushedseqids; will record last flushed sequence id for regions by regionserver report all over again
2018-10-08 18:10:42,393 INFO [master/cn012:0:becomeActiveMaster] master.HMaster(1011): hbase:meta {1588230740 state=OFFLINE, ts=1539022241400, server=null}
2018-10-08 18:10:42,434 INFO [RS:0;cn012:37486] regionserver.HRegionServer(879): ClusterId : a09eba57-8547-488b-bfcc-4ab1ccd8474f
2018-10-08 18:10:42,438 INFO [RS:0;cn012:37486] procedure.ProcedureManagerHost(71): User procedure org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager was loaded successfully.
2018-10-08 18:10:42,442 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initializing
2018-10-08 18:10:42,463 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(47): Procedure flush-table-proc initialized
2018-10-08 18:10:42,463 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(45): Procedure backup-proc initializing
2018-10-08 18:10:42,469 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(47): Procedure backup-proc initialized
2018-10-08 18:10:42,469 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initializing
2018-10-08 18:10:42,522 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(47): Procedure online-snapshot initialized
2018-10-08 18:10:42,523 INFO [RS:0;cn012:37486] zookeeper.ReadOnlyZKClient(139): Connect 0x20c1e999 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:10:42,572 DEBUG [RS:0;cn012:37486] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66e4a5b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:10:42,573 DEBUG [RS:0;cn012:37486] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55f794d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=cn012.l42scl.hortonworks.com/172.18.128.12:0
2018-10-08 18:10:42,579 DEBUG [RS:0;cn012:37486] regionserver.ShutdownHook(88): Installed shutdown hook thread: Shutdownhook:RS:0;cn012:37486
2018-10-08 18:10:42,584 INFO [RS:0;cn012:37486] regionserver.RegionServerCoprocessorHost(67): System coprocessor loading is enabled
2018-10-08 18:10:42,584 INFO [RS:0;cn012:37486] regionserver.RegionServerCoprocessorHost(68): Table coprocessor loading is enabled
2018-10-08 18:10:42,588 INFO [RS:0;cn012:37486] regionserver.HRegionServer(2613): reportForDuty to master=cn012.l42scl.hortonworks.com,42545,1539022237747 with port=37486, startcode=1539022239614
2018-10-08 18:10:42,708 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:34119, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase.hfs.0 (auth:SIMPLE), service=RegionServerStatusService
2018-10-08 18:10:42,723 DEBUG [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(1124): Stored pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, hasLock=false; InitMetaProcedure table=hbase:meta
2018-10-08 18:10:42,741 DEBUG [master/cn012:0:becomeActiveMaster] executor.ExecutorService(98): Starting executor service name=MASTER_OPEN_REGION-master/cn012:0, corePoolSize=5, maxPoolSize=5
2018-10-08 18:10:42,742 DEBUG [master/cn012:0:becomeActiveMaster] executor.ExecutorService(98): Starting executor service name=MASTER_CLOSE_REGION-master/cn012:0, corePoolSize=5, maxPoolSize=5
2018-10-08 18:10:42,742 DEBUG [master/cn012:0:becomeActiveMaster] executor.ExecutorService(98): Starting executor service name=MASTER_SERVER_OPERATIONS-master/cn012:0, corePoolSize=5, maxPoolSize=5
2018-10-08 18:10:42,742 DEBUG [master/cn012:0:becomeActiveMaster] executor.ExecutorService(98): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/cn012:0, corePoolSize=5, maxPoolSize=5
2018-10-08 18:10:42,742 DEBUG [master/cn012:0:becomeActiveMaster] executor.ExecutorService(98): Starting executor service name=M_LOG_REPLAY_OPS-master/cn012:0, corePoolSize=10, maxPoolSize=10
2018-10-08 18:10:42,742 DEBUG [master/cn012:0:becomeActiveMaster] executor.ExecutorService(98): Starting executor service name=MASTER_TABLE_OPERATIONS-master/cn012:0, corePoolSize=1, maxPoolSize=1
2018-10-08 18:10:42,745 INFO [master/cn012:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT, hasLock=false; org.apache.hadoop.hbase.procedure2.ProcedureExecutor$CompletedProcedureCleaner; timeout=30000, timestamp=1539022272745
2018-10-08 18:10:42,749 INFO [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore$DirScanPool(90): Cleaner pool size is 6
2018-10-08 18:10:42,750 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2018-10-08 18:10:42,751 INFO [master/cn012:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(106): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:54078
2018-10-08 18:10:42,752 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2018-10-08 18:10:42,763 DEBUG [RS:0;cn012:37486] regionserver.HRegionServer(2633): Master is not running yet
2018-10-08 18:10:42,763 WARN [RS:0;cn012:37486] regionserver.HRegionServer(955): reportForDuty failed; sleeping 3000 ms and then retrying.
2018-10-08 18:10:42,765 INFO [master/cn012:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7b9a705f to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:10:42,798 DEBUG [master/cn012:0:becomeActiveMaster-EventThread] zookeeper.ZKWatcher(478): replicationLogCleaner0x0, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2018-10-08 18:10:42,799 DEBUG [master/cn012:0:becomeActiveMaster-EventThread] zookeeper.ZKWatcher(543): replicationLogCleaner-0x16654dfacc40004 connected
2018-10-08 18:10:42,841 DEBUG [master/cn012:0:becomeActiveMaster] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a7cec9a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:10:42,841 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.backup.master.BackupLogCleaner
2018-10-08 18:10:42,842 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2018-10-08 18:10:42,843 INFO [master/cn012:0:becomeActiveMaster] cleaner.LogCleaner(155): Creating OldWALs cleaners with size=2
2018-10-08 18:10:42,851 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2018-10-08 18:10:42,853 INFO [master/cn012:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x21347a00 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:10:42,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2018-10-08 18:10:42,878 DEBUG [master/cn012:0:becomeActiveMaster] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a0cdca2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:10:42,878 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.backup.BackupHFileCleaner
2018-10-08 18:10:42,881 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2018-10-08 18:10:42,883 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.CleanerChore(251): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2018-10-08 18:10:42,885 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.HFileCleaner(225): Starting for large file=Thread[master/cn012:0:becomeActiveMaster-HFileCleaner.large.0-1539022242885,5,FailOnTimeoutGroup]
2018-10-08 18:10:42,886 DEBUG [master/cn012:0:becomeActiveMaster] cleaner.HFileCleaner(240): Starting for small files=Thread[master/cn012:0:becomeActiveMaster-HFileCleaner.small.0-1539022242885,5,FailOnTimeoutGroup]
2018-10-08 18:10:42,950 INFO [PEWorker-2] procedure.MasterProcedureScheduler(689): pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN checking lock on 1588230740
2018-10-08 18:10:43,020 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(160): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; rit=OFFLINE, location=null; forceNewPlan=false, retain=false
2018-10-08 18:10:43,171 WARN [master/cn012:0] assignment.AssignmentManager(1718): No servers available; cannot place 1 unassigned regions.
2018-10-08 18:10:44,177 WARN [master/cn012:0] assignment.AssignmentManager(1718): No servers available; cannot place 1 unassigned regions.
2018-10-08 18:10:45,178 WARN [master/cn012:0] assignment.AssignmentManager(1718): No servers available; cannot place 1 unassigned regions.
2018-10-08 18:10:45,765 INFO [RS:0;cn012:37486] regionserver.HRegionServer(2613): reportForDuty to master=cn012.l42scl.hortonworks.com,42545,1539022237747 with port=37486, startcode=1539022239614
2018-10-08 18:10:45,778 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.ServerManager(439): Registering regionserver=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:45,792 DEBUG [RS:0;cn012:37486] regionserver.HRegionServer(1506): Config from master: hbase.rootdir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:10:45,793 DEBUG [RS:0;cn012:37486] regionserver.HRegionServer(1506): Config from master: fs.defaultFS=hdfs://localhost:41712
2018-10-08 18:10:45,793 DEBUG [RS:0;cn012:37486] regionserver.HRegionServer(1506): Config from master: hbase.master.info.port=36122
2018-10-08 18:10:45,851 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2018-10-08 18:10:45,852 DEBUG [RS:0;cn012:37486] zookeeper.ZKUtil(355): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/rs/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:45,852 WARN [RS:0;cn012:37486] hbase.ZNodeClearer(63): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2018-10-08 18:10:45,856 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node created, adding [cn012.l42scl.hortonworks.com,37486,1539022239614]
2018-10-08 18:10:45,927 DEBUG [RS:0;cn012:37486] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(290): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop 2.8+
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
	at java.lang.Class.getDeclaredMethod(Class.java:2130)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper27(FanOutOneBlockAsyncDFSOutputSaslHelper.java:229)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:288)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:300)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:129)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:136)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:199)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1814)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1528)
	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:157)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:958)
	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:184)
	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:130)
	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
	at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:341)
	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:165)
	at java.lang.Thread.run(Thread.java:748)
2018-10-08 18:10:45,931 INFO [RS:0;cn012:37486] wal.WALFactory(157): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2018-10-08 18:10:45,932 DEBUG [master/cn012:0] assignment.AssignmentManager(1739): Processing assignQueue; systemServersCount=1, allServersCount=1
2018-10-08 18:10:45,934 INFO [PEWorker-13] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:45,941 DEBUG [RS:0;cn012:37486] regionserver.HRegionServer(1821): logDir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:45,984 DEBUG [PEWorker-13] zookeeper.MetaTableLocator(466): META region location doesn't exist, create it
2018-10-08 18:10:46,048 DEBUG [RS:0;cn012:37486] zookeeper.ZKUtil(355): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/rs/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:46,053 INFO [PEWorker-13] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:10:46,061 DEBUG [RS:0;cn012:37486] regionserver.Replication(144): Replication stats-in-log period=300 seconds
2018-10-08 18:10:46,079 INFO [RS:0;cn012:37486] regionserver.MetricsRegionServerWrapperImpl(147): Computing regionserver metrics every 5000 milliseconds
2018-10-08 18:10:46,108 INFO [RS:0;cn012:37486] regionserver.MemStoreFlusher(133): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false
2018-10-08 18:10:46,117 INFO [RS:0;cn012:37486] throttle.PressureAwareCompactionThroughputController(134): Compaction throughput configurations, higher bound: 20.00 MB/second, lower bound 10.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2018-10-08 18:10:46,119 INFO [RS:0;cn012:37486] regionserver.HRegionServer$CompactionChecker(1707): CompactionChecker runs every PT10S
2018-10-08 18:10:46,135 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_OPEN_REGION-regionserver/cn012:0, corePoolSize=3, maxPoolSize=3
2018-10-08 18:10:46,136 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_OPEN_META-regionserver/cn012:0, corePoolSize=1, maxPoolSize=1
2018-10-08 18:10:46,136 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/cn012:0, corePoolSize=3, maxPoolSize=3
2018-10-08 18:10:46,136 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_CLOSE_REGION-regionserver/cn012:0, corePoolSize=3, maxPoolSize=3
2018-10-08 18:10:46,136 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_CLOSE_META-regionserver/cn012:0, corePoolSize=1, maxPoolSize=1
2018-10-08 18:10:46,137 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/cn012:0, corePoolSize=2, maxPoolSize=2
2018-10-08 18:10:46,137 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/cn012:0, corePoolSize=10, maxPoolSize=10
2018-10-08 18:10:46,137 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/cn012:0, corePoolSize=3, maxPoolSize=3
2018-10-08 18:10:46,137 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_REFRESH_PEER-regionserver/cn012:0, corePoolSize=2, maxPoolSize=2
2018-10-08 18:10:46,137 DEBUG [RS:0;cn012:37486] executor.ExecutorService(98): Starting executor service name=RS_REPLAY_SYNC_REPLICATION_WAL-regionserver/cn012:0, corePoolSize=1, maxPoolSize=1
2018-10-08 18:10:46,159 INFO [SplitLogWorker-cn012:37486] regionserver.SplitLogWorker(211): SplitLogWorker cn012.l42scl.hortonworks.com,37486,1539022239614 starting
2018-10-08 18:10:46,164 INFO [RS:0;cn012:37486] regionserver.HeapMemoryManager(210): Starting, tuneOn=false
2018-10-08 18:10:46,195 INFO [RS:0;cn012:37486] regionserver.HRegionServer(1547): Serving as cn012.l42scl.hortonworks.com,37486,1539022239614, RpcServer on cn012.l42scl.hortonworks.com/172.18.128.12:37486, sessionid=0x16654dfacc40001
2018-10-08 18:10:46,195 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc starting
2018-10-08 18:10:46,195 DEBUG [RS:0;cn012:37486] flush.RegionServerFlushTableProcedureManager(104): Start region server flush procedure manager cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:46,196 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'cn012.l42scl.hortonworks.com,37486,1539022239614'
2018-10-08 18:10:46,196 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/flush-table-proc/abort'
2018-10-08 18:10:46,197 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/flush-table-proc/acquired'
2018-10-08 18:10:46,198 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(55): Procedure flush-table-proc started
2018-10-08 18:10:46,198 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(53): Procedure backup-proc starting
2018-10-08 18:10:46,198 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'cn012.l42scl.hortonworks.com,37486,1539022239614'
2018-10-08 18:10:46,198 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2018-10-08 18:10:46,200 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2018-10-08 18:10:46,201 INFO [RS:0;cn012:37486] regionserver.LogRollRegionServerProcedureManager(94): Started region server backup manager.
2018-10-08 18:10:46,201 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(55): Procedure backup-proc started
2018-10-08 18:10:46,201 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot starting
2018-10-08 18:10:46,201 DEBUG [RS:0;cn012:37486] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:46,201 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'cn012.l42scl.hortonworks.com,37486,1539022239614'
2018-10-08 18:10:46,201 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:10:46,202 DEBUG [RS:0;cn012:37486] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2018-10-08 18:10:46,203 DEBUG [RS:0;cn012:37486] procedure.RegionServerProcedureManagerHost(55): Procedure online-snapshot started
2018-10-08 18:10:46,203 INFO [RS:0;cn012:37486] quotas.RegionServerRpcQuotaManager(62): Quota support disabled
2018-10-08 18:10:46,204 INFO [RS:0;cn012:37486] quotas.RegionServerSpaceQuotaManager(84): Quota support disabled, not starting space quota manager.
2018-10-08 18:10:46,506 DEBUG [RSProcedureDispatcher-pool3-t1] master.ServerManager(746): New admin connection to cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:46,518 INFO [RS-EventLoopGroup-3-4] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:57622, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=AdminService
2018-10-08 18:10:46,525 INFO [RS_CLOSE_META-regionserver/cn012:0-0] handler.AssignRegionHandler(101): Open hbase:meta,,1.1588230740
2018-10-08 18:10:46,526 INFO [RS_CLOSE_META-regionserver/cn012:0-0] wal.WALFactory(157): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2018-10-08 18:10:46,546 INFO [RS_CLOSE_META-regionserver/cn012:0-0] wal.AbstractFSWAL(419): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta, suffix=.meta, logDir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614, archiveDir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/oldWALs
2018-10-08 18:10:46,577 DEBUG [RS-EventLoopGroup-3-5] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]
2018-10-08 18:10:46,605 INFO [RS_CLOSE_META-regionserver/cn012:0-0] wal.AbstractFSWAL(684): New WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022246561.meta
2018-10-08 18:10:46,605 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] wal.AbstractFSWAL(773): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]]
2018-10-08 18:10:46,606 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(7217): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:10:46,645 INFO [RS_CLOSE_META-regionserver/cn012:0-0] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:10:46,648 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] coprocessor.CoprocessorHost(200): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2018-10-08 18:10:46,667 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(8196): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2018-10-08 18:10:46,667 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.RegionCoprocessorHost(394): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2018-10-08 18:10:46,675 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table meta 1588230740
2018-10-08 18:10:46,675 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(836): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:10:46,676 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(7256): checking encryption for 1588230740
2018-10-08 18:10:46,678 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(7261): checking classloading for 1588230740
2018-10-08 18:10:46,685 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/info
2018-10-08 18:10:46,686 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/info
2018-10-08 18:10:46,687 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:46,688 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:46,693 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:46,696 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/rep_barrier
2018-10-08 18:10:46,696 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/rep_barrier
2018-10-08 18:10:46,697 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:46,698 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:46,699 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:46,701 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/table
2018-10-08 18:10:46,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/table
2018-10-08 18:10:46,703 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(239): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:46,704 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:46,705 INFO [StoreOpener-1588230740-1] regionserver.HStore(327): Store=table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:46,705 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(949): replaying wal for 1588230740
2018-10-08 18:10:46,708 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740
2018-10-08 18:10:46,712 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/meta/1588230740
2018-10-08 18:10:46,712 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(957): stopping wal replay for 1588230740
2018-10-08 18:10:46,713 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(969): Cleaning up temporary data for 1588230740
2018-10-08 18:10:46,714 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(980): Cleaning up detritus for 1588230740
2018-10-08 18:10:46,717 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7M)) instead.
2018-10-08 18:10:46,719 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(1005): writing seq id for 1588230740
2018-10-08 18:10:46,721 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(1009): Opened 1588230740; next sequenceid=2
2018-10-08 18:10:46,721 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(1016): Running coprocessor post-open hooks for 1588230740
2018-10-08 18:10:46,785 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegionServer(2198): Post open deploy tasks for hbase:meta,,1.1588230740
2018-10-08 18:10:46,820 INFO [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=42545] zookeeper.MetaTableLocator(452): Setting hbase:meta (replicaId=0) location in ZooKeeper as cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:46,851 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/meta-region-server
2018-10-08 18:10:46,856 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegionServer(2222): Finished post open deploy task for hbase:meta,,1.1588230740
2018-10-08 18:10:46,857 INFO [RS_CLOSE_META-regionserver/cn012:0-0] handler.AssignRegionHandler(138): Opened hbase:meta,,1.1588230740
2018-10-08 18:10:47,391 INFO [PEWorker-14] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; resume parent processing.
2018-10-08 18:10:47,392 INFO [PEWorker-14] procedure2.ProcedureExecutor(1507): Finished pid=3, ppid=2, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 916msec
2018-10-08 18:10:47,669 INFO [PEWorker-15] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=1, state=RUNNABLE, hasLock=false; InitMetaProcedure table=hbase:meta; resume parent processing.
2018-10-08 18:10:47,669 INFO [PEWorker-15] procedure2.ProcedureExecutor(1507): Finished pid=2, ppid=1, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 4.5240sec
2018-10-08 18:10:47,924 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(135): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:10:47,925 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(139): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.backup.BackupObserver Metrics about HBase RegionObservers
2018-10-08 18:10:47,926 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(135): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2018-10-08 18:10:47,926 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(139): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers
2018-10-08 18:10:47,976 INFO [PEWorker-4] procedure2.ProcedureExecutor(1507): Finished pid=1, state=SUCCESS, hasLock=false; InitMetaProcedure table=hbase:meta in 5.4610sec
2018-10-08 18:10:47,976 INFO [master/cn012:0:becomeActiveMaster] master.HMaster(1047): Master startup: status=Wait for region servers to report in, state=RUNNING, startTime=1539022239888, completionTime=-1
2018-10-08 18:10:47,976 INFO [master/cn012:0:becomeActiveMaster] master.ServerManager(854): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2018-10-08 18:10:47,977 DEBUG [master/cn012:0:becomeActiveMaster] assignment.AssignmentManager(1214): Joining cluster...
2018-10-08 18:10:48,062 INFO [RS-EventLoopGroup-3-8] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:57644, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:10:48,115 INFO [master/cn012:0:becomeActiveMaster] assignment.AssignmentManager(1226): Number of RegionServers=1
2018-10-08 18:10:48,117 INFO [master/cn012:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(82): ADDED pid=-1, state=WAITING_TIMEOUT, hasLock=false; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1539022308117
2018-10-08 18:10:48,118 INFO [master/cn012:0:becomeActiveMaster] assignment.AssignmentManager(1234): Joined the cluster in 141msec
2018-10-08 18:10:48,286 INFO [master/cn012:0:becomeActiveMaster] master.TableNamespaceManager(96): Namespace table not found. Creating...
2018-10-08 18:10:48,292 INFO [master/cn012:0:becomeActiveMaster] master.HMaster(2040): Client=null/null create 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}
2018-10-08 18:10:48,552 DEBUG [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(1124): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=hbase:namespace
2018-10-08 18:10:48,854 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(320): Archiving region hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d. from FS
2018-10-08 18:10:48,857 DEBUG [PEWorker-5] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:10:48,860 DEBUG [PEWorker-5] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d empty.
2018-10-08 18:10:48,862 DEBUG [PEWorker-5] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:48,862 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(324): Table 'hbase:namespace' archived!
2018-10-08 18:10:48,904 DEBUG [PEWorker-5] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2018-10-08 18:10:48,909 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(7043): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', VERSIONS => '10', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'true', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == hbase:namespace
2018-10-08 18:10:48,931 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(836): Instantiated hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:10:48,932 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1554): Closing 59e0b46d9fd65e74c2c583b12693382d, disabling compactions & flushes
2018-10-08 18:10:48,932 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1594): Updates disabled for region hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.
2018-10-08 18:10:48,932 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1711): Closed hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.
2018-10-08 18:10:49,062 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":1539022249045},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022249045}]},"ts":1539022249045}
2018-10-08 18:10:49,131 INFO [PEWorker-5] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:10:49,208 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022249199}]},"ts":1539022249199}
2018-10-08 18:10:49,216 INFO [PEWorker-5] hbase.MetaTableAccessor(1700): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2018-10-08 18:10:49,228 INFO [RS:0;cn012:37486] wal.AbstractFSWAL(419): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=cn012.l42scl.hortonworks.com%2C37486%2C1539022239614, suffix=, logDir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614, archiveDir=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/oldWALs
2018-10-08 18:10:49,244 DEBUG [RS-EventLoopGroup-3-9] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]
2018-10-08 18:10:49,256 INFO [RS:0;cn012:37486] wal.AbstractFSWAL(684): New WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022249231
2018-10-08 18:10:49,257 DEBUG [RS:0;cn012:37486] wal.AbstractFSWAL(773): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]]
2018-10-08 18:10:49,332 INFO [PEWorker-5] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=hbase:namespace, region=59e0b46d9fd65e74c2c583b12693382d, ASSIGN}]
2018-10-08 18:10:49,430 INFO [PEWorker-6] procedure.MasterProcedureScheduler(689): pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=hbase:namespace, region=59e0b46d9fd65e74c2c583b12693382d, ASSIGN checking lock on 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:49,514 INFO [PEWorker-6] assignment.TransitRegionStateProcedure(160): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=hbase:namespace, region=59e0b46d9fd65e74c2c583b12693382d, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:10:49,697 INFO [PEWorker-7] assignment.RegionStateStore(200): pid=5 updating hbase:meta row=59e0b46d9fd65e74c2c583b12693382d, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:49,710 INFO [PEWorker-7] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:10:50,180 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_1184345299_22 at /127.0.0.1:38268 [Receiving block BP-827454334-172.18.128.12-1539022232083:blk_1073741829_1005]] datanode.BlockReceiver(440): Slow flushOrSync took 467ms (threshold=300ms), isSync:true, flushTotalNanos=228517ns, volume=file:/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data1/, blockId=1073741829
2018-10-08 18:10:50,493 INFO [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(101): Open hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.
2018-10-08 18:10:50,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(7217): Opening region: {ENCODED => 59e0b46d9fd65e74c2c583b12693382d, NAME => 'hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:10:50,498 INFO [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:10:50,499 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table namespace 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(836): Instantiated hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:10:50,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(7256): checking encryption for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(7261): checking classloading for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,513 DEBUG [StoreOpener-59e0b46d9fd65e74c2c583b12693382d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/info
2018-10-08 18:10:50,514 DEBUG [StoreOpener-59e0b46d9fd65e74c2c583b12693382d-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/info
2018-10-08 18:10:50,516 INFO [StoreOpener-59e0b46d9fd65e74c2c583b12693382d-1] hfile.CacheConfig(239): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:10:50,517 INFO [StoreOpener-59e0b46d9fd65e74c2c583b12693382d-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:10:50,519 INFO [StoreOpener-59e0b46d9fd65e74c2c583b12693382d-1] regionserver.HStore(327): Store=info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:10:50,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(949): replaying wal for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(957): stopping wal replay for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(969): Cleaning up temporary data for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(980): Cleaning up detritus for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(1005): writing seq id for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,540 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:10:50,540 INFO [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(1009): Opened 59e0b46d9fd65e74c2c583b12693382d; next sequenceid=2
2018-10-08 18:10:50,540 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegion(1016): Running coprocessor post-open hooks for 59e0b46d9fd65e74c2c583b12693382d
2018-10-08 18:10:50,547 INFO [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2198): Post open deploy tasks for hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.
2018-10-08 18:10:50,560 INFO [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=42545] assignment.RegionStateStore(200): pid=5 updating hbase:meta row=59e0b46d9fd65e74c2c583b12693382d, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:10:50,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2222): Finished post open deploy task for hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.
2018-10-08 18:10:50,576 INFO [RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(138): Opened hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d.
2018-10-08 18:10:51,072 INFO [PEWorker-8] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=hbase:namespace, region=59e0b46d9fd65e74c2c583b12693382d, ASSIGN; resume parent processing.
2018-10-08 18:10:51,072 INFO [PEWorker-8] procedure2.ProcedureExecutor(1507): Finished pid=6, ppid=5, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 1.0150sec
2018-10-08 18:10:51,456 INFO [PEWorker-9] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=hbase:namespace; resume parent processing.
2018-10-08 18:10:51,456 INFO [PEWorker-9] procedure2.ProcedureExecutor(1507): Finished pid=5, ppid=4, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=hbase:namespace, region=59e0b46d9fd65e74c2c583b12693382d, ASSIGN in 1.7400sec
2018-10-08 18:10:51,552 DEBUG [PEWorker-10] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022251552}]},"ts":1539022251552}
2018-10-08 18:10:51,563 INFO [PEWorker-10] hbase.MetaTableAccessor(1700): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2018-10-08 18:10:51,605 DEBUG [master/cn012:0:becomeActiveMaster] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/namespace
2018-10-08 18:10:51,660 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/namespace
2018-10-08 18:10:51,972 DEBUG [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(1124): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=default
2018-10-08 18:10:52,119 INFO [PEWorker-10] procedure2.ProcedureExecutor(1507): Finished pid=4, state=SUCCESS, hasLock=false; CreateTableProcedure table=hbase:namespace in 3.4150sec
2018-10-08 18:10:52,202 WARN [HBase-Metrics2-1] impl.MetricsConfig(134): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2018-10-08 18:10:52,834 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:10:53,409 INFO [PEWorker-10] procedure2.ProcedureExecutor(1507): Finished pid=7, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=default in 1.3240sec
2018-10-08 18:10:53,614 DEBUG [master/cn012:0:becomeActiveMaster] procedure2.ProcedureExecutor(1124): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=hbase
2018-10-08 18:10:54,159 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:10:54,435 INFO [PEWorker-12] procedure2.ProcedureExecutor(1507): Finished pid=8, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=hbase in 849msec
2018-10-08 18:10:54,543 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/default
2018-10-08 18:10:54,626 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/hbase
2018-10-08 18:10:54,627 INFO [master/cn012:0:becomeActiveMaster] master.HMaster(1110): Master has completed initialization 14.698sec
2018-10-08 18:10:54,633 INFO [master/cn012:0:becomeActiveMaster] quotas.MasterQuotaManager(90): Quota support disabled
2018-10-08 18:10:54,633 INFO [master/cn012:0:becomeActiveMaster] zookeeper.ZKWatcher(205): not a secure deployment, proceeding
2018-10-08 18:10:54,649 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x41064d23 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:10:54,668 DEBUG [master/cn012:0:becomeActiveMaster] master.HMaster(1168): Balancer post startup initialization complete, took 0 seconds
2018-10-08 18:10:54,678 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@605673fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:10:54,722 INFO [RS-EventLoopGroup-3-11] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:57688, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:10:54,740 INFO [Time-limited test] hbase.HBaseTestingUtility(1102): Minicluster is up; activeMaster=cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:10:54,740 INFO [Time-limited test] hbase.HBaseTestingUtility(2680): Starting mini mapreduce cluster...
2018-10-08 18:10:54,740 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting test.cache.data to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cache_data in system properties and HBase conf
2018-10-08 18:10:54,741 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting hadoop.tmp.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_tmp in system properties and HBase conf
2018-10-08 18:10:54,741 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting hadoop.log.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs in system properties and HBase conf
2018-10-08 18:10:54,741 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting mapreduce.cluster.local.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_local in system properties and HBase conf
2018-10-08 18:10:54,741 INFO [Time-limited test] hbase.HBaseTestingUtility(752): Setting mapreduce.cluster.temp.dir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_temp in system properties and HBase conf
2018-10-08 18:10:54,741 INFO [Time-limited test] hbase.HBaseTestingUtility(743): read short circuit is OFF
2018-10-08 18:10:57,277 INFO [Thread-262] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:10:57,279 INFO [Thread-262] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@6b11374e{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:10:57,280 INFO [Thread-262] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@7941e491{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,AVAILABLE}
Oct 08, 2018 6:10:57 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices as a root resource class
Oct 08, 2018 6:10:57 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.hs.webapp.JAXBContextResolver as a provider class
Oct 08, 2018 6:10:57 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Oct 08, 2018 6:10:57 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Oct 08, 2018 6:10:57 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.hs.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:10:58 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:10:58 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
2018-10-08 18:10:58,636 INFO [Thread-262] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@be4b506{/,file:///tmp/jetty-cn012.l42scl.hortonworks.com-36633-jobhistory-_-any-1331701933170655509.dir/webapp/,AVAILABLE}{/jobhistory}
2018-10-08 18:10:58,638 INFO [Thread-262] server.AbstractConnector(278): Started ServerConnector@4d733c6a{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:36633}
2018-10-08 18:10:58,638 INFO [Thread-262] server.Server(414): Started @29033ms
Oct 08, 2018 6:11:00 PM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
2018-10-08 18:11:00,060 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:11:00,078 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@2ca64751{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:11:00,079 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@541054aa{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,AVAILABLE}
Oct 08, 2018 6:11:00 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
Oct 08, 2018 6:11:00 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
Oct 08, 2018 6:11:00 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Oct 08, 2018 6:11:00 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Oct 08, 2018 6:11:00 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:11:00 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:11:00 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2018-10-08 18:11:00,763 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@315aebbd{/,file:///tmp/jetty-cn012.l42scl.hortonworks.com-33104-cluster-_-any-8641905118703173742.dir/webapp/,AVAILABLE}{/cluster}
2018-10-08 18:11:00,765 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@394100dd{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:33104}
2018-10-08 18:11:00,765 INFO [Time-limited test] server.Server(414): Started @31160ms
2018-10-08 18:11:01,217 WARN [Time-limited test] tracker.NMLogAggregationStatusTracker(96): Log Aggregation is disabled.So is the LogAggregationStatusTracker.
Oct 08, 2018 6:11:01 PM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
2018-10-08 18:11:01,259 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:11:01,266 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@55391eaf{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:11:01,266 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@82dd238{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,AVAILABLE}
Oct 08, 2018 6:11:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
Oct 08, 2018 6:11:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Oct 08, 2018 6:11:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
Oct 08, 2018 6:11:01 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Oct 08, 2018 6:11:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:11:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:11:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2018-10-08 18:11:01,595 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@19fd8a37{/,file:///tmp/jetty-cn012.l42scl.hortonworks.com-43378-node-_-any-2898110005109034533.dir/webapp/,AVAILABLE}{/node}
2018-10-08 18:11:01,598 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@54512ed9{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:43378}
2018-10-08 18:11:01,599 INFO [Time-limited test] server.Server(414): Started @31993ms
2018-10-08 18:11:01,885 WARN [Time-limited test] tracker.NMLogAggregationStatusTracker(96): Log Aggregation is disabled.So is the LogAggregationStatusTracker.
Oct 08, 2018 6:11:01 PM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
2018-10-08 18:11:01,908 INFO [Time-limited test] server.Server(346): jetty-9.3.19.v20170502
2018-10-08 18:11:01,910 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@7d0f0186{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,AVAILABLE}
2018-10-08 18:11:01,910 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.s.ServletContextHandler@4092dae5{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,AVAILABLE}
Oct 08, 2018 6:11:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
Oct 08, 2018 6:11:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Oct 08, 2018 6:11:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
Oct 08, 2018 6:11:02 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Oct 08, 2018 6:11:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:11:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Oct 08, 2018 6:11:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2018-10-08 18:11:02,224 INFO [Time-limited test] handler.ContextHandler(781): Started o.e.j.w.WebAppContext@5fb77b78{/,file:///tmp/jetty-cn012.l42scl.hortonworks.com-42937-node-_-any-3382806847668530074.dir/webapp/,AVAILABLE}{/node}
2018-10-08 18:11:02,226 INFO [Time-limited test] server.AbstractConnector(278): Started ServerConnector@4c2e90f3{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:42937}
2018-10-08 18:11:02,227 INFO [Time-limited test] server.Server(414): Started @32621ms
2018-10-08 18:11:02,248 INFO [Time-limited test] hbase.HBaseTestingUtility(2708): Mini mapreduce cluster started
2018-10-08 18:11:02,249 INFO [Time-limited test] backup.TestBackupBase(315): ROOTDIR hdfs://localhost:41712/backupUT
2018-10-08 18:11:02,277 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:42454, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=MasterService
2018-10-08 18:11:02,292 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$16(3223): Client=hbase//172.18.128.12 creating {NAME => 'ns1'}
2018-10-08 18:11:02,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=ns1
2018-10-08 18:11:02,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=9
2018-10-08 18:11:02,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=9
2018-10-08 18:11:02,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=9
2018-10-08 18:11:03,017 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:11:03,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=9
2018-10-08 18:11:03,406 INFO [PEWorker-1] procedure2.ProcedureExecutor(1507): Finished pid=9, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=ns1 in 869msec
2018-10-08 18:11:03,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=9
2018-10-08 18:11:03,769 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$16(3223): Client=hbase//172.18.128.12 creating {NAME => 'ns2'}
2018-10-08 18:11:03,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=ns2
2018-10-08 18:11:04,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=10
2018-10-08 18:11:04,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=10
2018-10-08 18:11:04,367 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:11:04,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=10
2018-10-08 18:11:04,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1507): Finished pid=10, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=ns2 in 720msec
2018-10-08 18:11:04,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=10
2018-10-08 18:11:04,726 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$16(3223): Client=hbase//172.18.128.12 creating {NAME => 'ns3'}
2018-10-08 18:11:04,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=ns3
2018-10-08 18:11:05,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=11
2018-10-08 18:11:05,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=11
2018-10-08 18:11:05,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=11
2018-10-08 18:11:05,480 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:11:05,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=11
2018-10-08 18:11:05,919 INFO [PEWorker-13] procedure2.ProcedureExecutor(1507): Finished pid=11, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=ns3 in 939msec
2018-10-08 18:11:06,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=11
2018-10-08 18:11:06,185 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$16(3223): Client=hbase//172.18.128.12 creating {NAME => 'ns4'}
2018-10-08 18:11:06,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=12, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=ns4
2018-10-08 18:11:06,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=12
2018-10-08 18:11:06,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=12
2018-10-08 18:11:06,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=12
2018-10-08 18:11:06,869 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:11:07,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=12
2018-10-08 18:11:07,232 INFO [PEWorker-3] procedure2.ProcedureExecutor(1507): Finished pid=12, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=ns4 in 821msec
2018-10-08 18:11:07,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=12
2018-10-08 18:11:07,647 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$3(2004): Client=hbase//172.18.128.12 create 'test-1539022262249', {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:07,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=13, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=test-1539022262249
2018-10-08 18:11:08,064 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(630): Client=hbase//172.18.128.12 procedure request for creating table: namespace: "default" qualifier: "test-1539022262249" procId is: 13
2018-10-08 18:11:08,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:08,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:08,183 DEBUG [PEWorker-14] procedure.DeleteTableProcedure(320): Archiving region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. from FS
2018-10-08 18:11:08,188 DEBUG [PEWorker-14] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:08,191 DEBUG [PEWorker-14] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a empty.
2018-10-08 18:11:08,192 DEBUG [PEWorker-14] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:08,192 DEBUG [PEWorker-14] procedure.DeleteTableProcedure(324): Table 'test-1539022262249' archived!
2018-10-08 18:11:08,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:08,645 DEBUG [PEWorker-14] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/default/test-1539022262249/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:08,650 INFO [RegionOpenAndInitThread-test-1539022262249-1] regionserver.HRegion(7043): creating HRegion test-1539022262249 HTD == 'test-1539022262249', {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == test-1539022262249
2018-10-08 18:11:08,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:09,086 DEBUG [RegionOpenAndInitThread-test-1539022262249-1] regionserver.HRegion(836): Instantiated test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:09,089 DEBUG [RegionOpenAndInitThread-test-1539022262249-1] regionserver.HRegion(1554): Closing be1bf5445faddb63e45726410a07917a, disabling compactions & flushes
2018-10-08 18:11:09,089 DEBUG [RegionOpenAndInitThread-test-1539022262249-1] regionserver.HRegion(1594): Updates disabled for region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:09,089 INFO [RegionOpenAndInitThread-test-1539022262249-1] regionserver.HRegion(1711): Closed test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:09,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:09,225 DEBUG [PEWorker-14] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.","families":{"info":[{"qualifier":"regioninfo","vlen":52,"tag":[],"timestamp":1539022269225},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022269225}]},"ts":1539022269225}
2018-10-08 18:11:09,233 INFO [PEWorker-14] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:11:09,307 DEBUG [PEWorker-14] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"test-1539022262249","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022269306}]},"ts":1539022269306}
2018-10-08 18:11:09,313 INFO [PEWorker-14] hbase.MetaTableAccessor(1700): Updated tableName=test-1539022262249, state=ENABLING in hbase:meta
2018-10-08 18:11:09,387 INFO [PEWorker-14] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=test-1539022262249, region=be1bf5445faddb63e45726410a07917a, ASSIGN}]
2018-10-08 18:11:09,470 INFO [PEWorker-15] procedure.MasterProcedureScheduler(689): pid=14, ppid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=test-1539022262249, region=be1bf5445faddb63e45726410a07917a, ASSIGN checking lock on be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:09,588 INFO [PEWorker-15] assignment.TransitRegionStateProcedure(160): Starting pid=14, ppid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=test-1539022262249, region=be1bf5445faddb63e45726410a07917a, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:11:09,744 INFO [PEWorker-4] assignment.RegionStateStore(200): pid=14 updating hbase:meta row=be1bf5445faddb63e45726410a07917a, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:09,750 INFO [PEWorker-4] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=15, ppid=14, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:11:10,125 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(101): Open test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:10,125 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7217): Opening region: {ENCODED => be1bf5445faddb63e45726410a07917a, NAME => 'test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:11:10,126 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:11:10,127 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table test-1539022262249 be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,128 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(836): Instantiated test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:10,129 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7256): checking encryption for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,129 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7261): checking classloading for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,137 DEBUG [StoreOpener-be1bf5445faddb63e45726410a07917a-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f
2018-10-08 18:11:10,137 DEBUG [StoreOpener-be1bf5445faddb63e45726410a07917a-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f
2018-10-08 18:11:10,146 INFO [StoreOpener-be1bf5445faddb63e45726410a07917a-1] hfile.CacheConfig(239): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:10,147 INFO [StoreOpener-be1bf5445faddb63e45726410a07917a-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:10,148 INFO [StoreOpener-be1bf5445faddb63e45726410a07917a-1] regionserver.HStore(327): Store=f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:10,148 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(949): replaying wal for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,152 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,153 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/default/test-1539022262249/be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,153 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(957): stopping wal replay for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,153 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(969): Cleaning up temporary data for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,155 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(980): Cleaning up detritus for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,158 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1005): writing seq id for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,184 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:11:10,186 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1009): Opened be1bf5445faddb63e45726410a07917a; next sequenceid=2
2018-10-08 18:11:10,186 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1016): Running coprocessor post-open hooks for be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:10,188 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2198): Post open deploy tasks for test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:10,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:10,199 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=14 updating hbase:meta row=be1bf5445faddb63e45726410a07917a, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:10,205 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2222): Finished post open deploy task for test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:10,205 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(138): Opened test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:10,788 INFO [PEWorker-6] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=14, ppid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=test-1539022262249, region=be1bf5445faddb63e45726410a07917a, ASSIGN; resume parent processing.
2018-10-08 18:11:10,788 INFO [PEWorker-6] procedure2.ProcedureExecutor(1507): Finished pid=15, ppid=14, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 568msec
2018-10-08 18:11:11,156 INFO [PEWorker-16] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=13, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=test-1539022262249; resume parent processing.
2018-10-08 18:11:11,157 INFO [PEWorker-16] procedure2.ProcedureExecutor(1507): Finished pid=14, ppid=13, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=test-1539022262249, region=be1bf5445faddb63e45726410a07917a, ASSIGN in 1.4010sec
2018-10-08 18:11:11,233 DEBUG [PEWorker-7] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"test-1539022262249","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022271233}]},"ts":1539022271233}
2018-10-08 18:11:11,240 INFO [PEWorker-7] hbase.MetaTableAccessor(1700): Updated tableName=test-1539022262249, state=ENABLED in hbase:meta
2018-10-08 18:11:11,573 INFO [PEWorker-7] procedure2.ProcedureExecutor(1507): Finished pid=13, state=SUCCESS, hasLock=false; CreateTableProcedure table=test-1539022262249 in 3.7180sec
2018-10-08 18:11:12,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=13
2018-10-08 18:11:12,199 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: CREATE, Table Name: default:test-1539022262249, procId: 13 completed
2018-10-08 18:11:12,201 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x67056e66 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:12,237 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f7c3f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:12,262 INFO [RS-EventLoopGroup-3-12] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:57780, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:12,276 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegion(8446): writing data to region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. with WAL disabled. Data may be lost in the event of a crash.
2018-10-08 18:11:12,420 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$3(2004): Client=hbase//172.18.128.12 create 'ns2:test-15390222622491', {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:12,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=ns2:test-15390222622491
2018-10-08 18:11:12,766 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(630): Client=hbase//172.18.128.12 procedure request for creating table: namespace: "ns2" qualifier: "test-15390222622491" procId is: 16
2018-10-08 18:11:12,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:12,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:12,917 DEBUG [PEWorker-8] procedure.DeleteTableProcedure(320): Archiving region ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c. from FS
2018-10-08 18:11:12,920 DEBUG [PEWorker-8] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:12,921 DEBUG [PEWorker-8] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c empty.
2018-10-08 18:11:12,923 DEBUG [PEWorker-8] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:12,923 DEBUG [PEWorker-8] procedure.DeleteTableProcedure(324): Table 'ns2:test-15390222622491' archived!
2018-10-08 18:11:12,965 DEBUG [PEWorker-8] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns2/test-15390222622491/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:12,968 INFO [RegionOpenAndInitThread-ns2:test-15390222622491-1] regionserver.HRegion(7043): creating HRegion ns2:test-15390222622491 HTD == 'ns2:test-15390222622491', {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == ns2:test-15390222622491
2018-10-08 18:11:13,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:13,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:13,394 DEBUG [RegionOpenAndInitThread-ns2:test-15390222622491-1] regionserver.HRegion(836): Instantiated ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:13,396 DEBUG [RegionOpenAndInitThread-ns2:test-15390222622491-1] regionserver.HRegion(1554): Closing a5b65c0ba00fd6a2f67397f742450e8c, disabling compactions & flushes
2018-10-08 18:11:13,396 DEBUG [RegionOpenAndInitThread-ns2:test-15390222622491-1] regionserver.HRegion(1594): Updates disabled for region ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:11:13,396 INFO [RegionOpenAndInitThread-ns2:test-15390222622491-1] regionserver.HRegion(1711): Closed ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:11:13,507 DEBUG [PEWorker-8] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":1539022273506},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022273506}]},"ts":1539022273506}
2018-10-08 18:11:13,511 INFO [PEWorker-8] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:11:13,616 DEBUG [PEWorker-8] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"ns2:test-15390222622491","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022273615}]},"ts":1539022273615}
2018-10-08 18:11:13,620 INFO [PEWorker-8] hbase.MetaTableAccessor(1700): Updated tableName=ns2:test-15390222622491, state=ENABLING in hbase:meta
2018-10-08 18:11:13,652 INFO [PEWorker-8] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=ns2:test-15390222622491, region=a5b65c0ba00fd6a2f67397f742450e8c, ASSIGN}]
2018-10-08 18:11:13,723 INFO [PEWorker-9] procedure.MasterProcedureScheduler(689): pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=ns2:test-15390222622491, region=a5b65c0ba00fd6a2f67397f742450e8c, ASSIGN checking lock on a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:13,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:14,000 INFO [PEWorker-9] assignment.TransitRegionStateProcedure(160): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=ns2:test-15390222622491, region=a5b65c0ba00fd6a2f67397f742450e8c, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:11:14,154 INFO [PEWorker-11] assignment.RegionStateStore(200): pid=17 updating hbase:meta row=a5b65c0ba00fd6a2f67397f742450e8c, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:14,161 INFO [PEWorker-11] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:11:14,532 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] handler.AssignRegionHandler(101): Open ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:11:14,533 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(7217): Opening region: {ENCODED => a5b65c0ba00fd6a2f67397f742450e8c, NAME => 'ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:11:14,534 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:11:14,534 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table test-15390222622491 a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,534 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(836): Instantiated ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:14,534 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(7256): checking encryption for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,534 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(7261): checking classloading for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,542 DEBUG [StoreOpener-a5b65c0ba00fd6a2f67397f742450e8c-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/f
2018-10-08 18:11:14,542 DEBUG [StoreOpener-a5b65c0ba00fd6a2f67397f742450e8c-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/f
2018-10-08 18:11:14,543 INFO [StoreOpener-a5b65c0ba00fd6a2f67397f742450e8c-1] hfile.CacheConfig(239): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:14,543 INFO [StoreOpener-a5b65c0ba00fd6a2f67397f742450e8c-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:14,545 INFO [StoreOpener-a5b65c0ba00fd6a2f67397f742450e8c-1] regionserver.HStore(327): Store=f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:14,545 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(949): replaying wal for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,549 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,550 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,550 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(957): stopping wal replay for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,550 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(969): Cleaning up temporary data for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,551 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(980): Cleaning up detritus for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,554 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(1005): writing seq id for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,566 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:11:14,567 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(1009): Opened a5b65c0ba00fd6a2f67397f742450e8c; next sequenceid=2
2018-10-08 18:11:14,567 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(1016): Running coprocessor post-open hooks for a5b65c0ba00fd6a2f67397f742450e8c
2018-10-08 18:11:14,569 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegionServer(2198): Post open deploy tasks for ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:11:14,576 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=17 updating hbase:meta row=a5b65c0ba00fd6a2f67397f742450e8c, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:14,580 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegionServer(2222): Finished post open deploy task for ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:11:14,581 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] handler.AssignRegionHandler(138): Opened ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:11:14,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:14,908 INFO [PEWorker-12] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=ns2:test-15390222622491, region=a5b65c0ba00fd6a2f67397f742450e8c, ASSIGN; resume parent processing.
2018-10-08 18:11:14,909 INFO [PEWorker-12] procedure2.ProcedureExecutor(1507): Finished pid=18, ppid=17, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 481msec
2018-10-08 18:11:15,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=ns2:test-15390222622491; resume parent processing.
2018-10-08 18:11:15,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1507): Finished pid=17, ppid=16, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=ns2:test-15390222622491, region=a5b65c0ba00fd6a2f67397f742450e8c, ASSIGN in 1.2570sec
2018-10-08 18:11:15,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"ns2:test-15390222622491","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022275446}]},"ts":1539022275446}
2018-10-08 18:11:15,452 INFO [PEWorker-2] hbase.MetaTableAccessor(1700): Updated tableName=ns2:test-15390222622491, state=ENABLED in hbase:meta
2018-10-08 18:11:15,672 WARN [HBase-Metrics2-1] impl.MetricsConfig(134): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2018-10-08 18:11:15,834 INFO [PEWorker-2] procedure2.ProcedureExecutor(1507): Finished pid=16, state=SUCCESS, hasLock=false; CreateTableProcedure table=ns2:test-15390222622491 in 3.1970sec
2018-10-08 18:11:16,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=16
2018-10-08 18:11:16,891 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: CREATE, Table Name: ns2:test-15390222622491, procId: 16 completed
2018-10-08 18:11:16,902 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegion(8446): writing data to region ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c. with WAL disabled. Data may be lost in the event of a crash.
2018-10-08 18:11:17,024 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$3(2004): Client=hbase//172.18.128.12 create 'ns3:test-15390222622492', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:17,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=ns3:test-15390222622492
2018-10-08 18:11:17,376 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(630): Client=hbase//172.18.128.12 procedure request for creating table: namespace: "ns3" qualifier: "test-15390222622492" procId is: 19
2018-10-08 18:11:17,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:17,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:17,483 DEBUG [PEWorker-13] procedure.DeleteTableProcedure(320): Archiving region ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45. from FS
2018-10-08 18:11:17,487 DEBUG [PEWorker-13] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:17,488 DEBUG [PEWorker-13] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45 empty.
2018-10-08 18:11:17,490 DEBUG [PEWorker-13] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:17,490 DEBUG [PEWorker-13] procedure.DeleteTableProcedure(324): Table 'ns3:test-15390222622492' archived!
2018-10-08 18:11:17,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:17,937 DEBUG [PEWorker-13] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns3/test-15390222622492/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:17,941 INFO [RegionOpenAndInitThread-ns3:test-15390222622492-1] regionserver.HRegion(7043): creating HRegion ns3:test-15390222622492 HTD == 'ns3:test-15390222622492', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == ns3:test-15390222622492
2018-10-08 18:11:17,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:18,370 DEBUG [RegionOpenAndInitThread-ns3:test-15390222622492-1] regionserver.HRegion(836): Instantiated ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:18,372 DEBUG [RegionOpenAndInitThread-ns3:test-15390222622492-1] regionserver.HRegion(1554): Closing cea8b370d2c8987401a9e1fa10290c45, disabling compactions & flushes
2018-10-08 18:11:18,372 DEBUG [RegionOpenAndInitThread-ns3:test-15390222622492-1] regionserver.HRegion(1594): Updates disabled for region ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.
2018-10-08 18:11:18,372 INFO [RegionOpenAndInitThread-ns3:test-15390222622492-1] regionserver.HRegion(1711): Closed ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.
2018-10-08 18:11:18,479 DEBUG [PEWorker-13] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":1539022278478},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022278478}]},"ts":1539022278478}
2018-10-08 18:11:18,484 INFO [PEWorker-13] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:11:18,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:18,588 DEBUG [PEWorker-13] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"ns3:test-15390222622492","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022278588}]},"ts":1539022278588}
2018-10-08 18:11:18,592 INFO [PEWorker-13] hbase.MetaTableAccessor(1700): Updated tableName=ns3:test-15390222622492, state=ENABLING in hbase:meta
2018-10-08 18:11:18,678 INFO [PEWorker-13] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=20, ppid=19, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=ns3:test-15390222622492, region=cea8b370d2c8987401a9e1fa10290c45, ASSIGN}]
2018-10-08 18:11:18,821 INFO [PEWorker-3] procedure.MasterProcedureScheduler(689): pid=20, ppid=19, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=ns3:test-15390222622492, region=cea8b370d2c8987401a9e1fa10290c45, ASSIGN checking lock on cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:18,945 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(160): Starting pid=20, ppid=19, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=ns3:test-15390222622492, region=cea8b370d2c8987401a9e1fa10290c45, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:11:19,099 INFO [PEWorker-14] assignment.RegionStateStore(200): pid=20 updating hbase:meta row=cea8b370d2c8987401a9e1fa10290c45, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:19,106 INFO [PEWorker-14] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:11:19,444 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] handler.AssignRegionHandler(101): Open ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.
2018-10-08 18:11:19,445 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(7217): Opening region: {ENCODED => cea8b370d2c8987401a9e1fa10290c45, NAME => 'ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:11:19,446 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:11:19,446 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table test-15390222622492 cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,446 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(836): Instantiated ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:19,447 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(7256): checking encryption for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,447 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(7261): checking classloading for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,455 DEBUG [StoreOpener-cea8b370d2c8987401a9e1fa10290c45-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45/f
2018-10-08 18:11:19,455 DEBUG [StoreOpener-cea8b370d2c8987401a9e1fa10290c45-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45/f
2018-10-08 18:11:19,456 INFO [StoreOpener-cea8b370d2c8987401a9e1fa10290c45-1] hfile.CacheConfig(239): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:19,457 INFO [StoreOpener-cea8b370d2c8987401a9e1fa10290c45-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:19,458 INFO [StoreOpener-cea8b370d2c8987401a9e1fa10290c45-1] regionserver.HStore(327): Store=f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:19,458 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(949): replaying wal for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,464 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,464 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,465 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(957): stopping wal replay for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,465 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(969): Cleaning up temporary data for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,466 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(980): Cleaning up detritus for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,468 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(1005): writing seq id for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,481 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:11:19,481 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(1009): Opened cea8b370d2c8987401a9e1fa10290c45; next sequenceid=2
2018-10-08 18:11:19,481 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(1016): Running coprocessor post-open hooks for cea8b370d2c8987401a9e1fa10290c45
2018-10-08 18:11:19,483 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegionServer(2198): Post open deploy tasks for ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.
2018-10-08 18:11:19,491 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=20 updating hbase:meta row=cea8b370d2c8987401a9e1fa10290c45, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:19,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:19,496 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegionServer(2222): Finished post open deploy task for ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.
2018-10-08 18:11:19,496 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] handler.AssignRegionHandler(138): Opened ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45.
2018-10-08 18:11:20,117 INFO [PEWorker-5] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=20, ppid=19, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=ns3:test-15390222622492, region=cea8b370d2c8987401a9e1fa10290c45, ASSIGN; resume parent processing.
2018-10-08 18:11:20,117 INFO [PEWorker-5] procedure2.ProcedureExecutor(1507): Finished pid=21, ppid=20, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 493msec
2018-10-08 18:11:20,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=19, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=ns3:test-15390222622492; resume parent processing.
2018-10-08 18:11:20,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1507): Finished pid=20, ppid=19, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=ns3:test-15390222622492, region=cea8b370d2c8987401a9e1fa10290c45, ASSIGN in 1.4390sec
2018-10-08 18:11:20,563 DEBUG [PEWorker-6] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"ns3:test-15390222622492","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022280563}]},"ts":1539022280563}
2018-10-08 18:11:20,569 INFO [PEWorker-6] hbase.MetaTableAccessor(1700): Updated tableName=ns3:test-15390222622492, state=ENABLED in hbase:meta
2018-10-08 18:11:21,441 INFO [PEWorker-6] procedure2.ProcedureExecutor(1507): Finished pid=19, state=SUCCESS, hasLock=false; CreateTableProcedure table=ns3:test-15390222622492 in 4.1110sec
2018-10-08 18:11:21,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=19
2018-10-08 18:11:21,496 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: CREATE, Table Name: ns3:test-15390222622492, procId: 19 completed
2018-10-08 18:11:21,497 DEBUG [Time-limited test] hbase.HBaseTestingUtility(3452): Waiting until all regions of table ns3:test-15390222622492 get assigned. Timeout = 60000ms
2018-10-08 18:11:21,502 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-10-08 18:11:21,517 INFO [Time-limited test] hbase.HBaseTestingUtility(3504): All regions for table ns3:test-15390222622492 assigned to meta. Checking AM states.
2018-10-08 18:11:21,518 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-10-08 18:11:21,519 INFO [Time-limited test] hbase.HBaseTestingUtility(3524): All regions for table ns3:test-15390222622492 assigned.
2018-10-08 18:11:21,523 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$3(2004): Client=hbase//172.18.128.12 create 'ns4:test-15390222622493', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:21,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=22, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=ns4:test-15390222622493
2018-10-08 18:11:21,858 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(630): Client=hbase//172.18.128.12 procedure request for creating table: namespace: "ns4" qualifier: "test-15390222622493" procId is: 22
2018-10-08 18:11:21,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:21,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:22,075 DEBUG [PEWorker-16] procedure.DeleteTableProcedure(320): Archiving region ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8. from FS
2018-10-08 18:11:22,078 DEBUG [PEWorker-16] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:22,079 DEBUG [PEWorker-16] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8 empty.
2018-10-08 18:11:22,082 DEBUG [PEWorker-16] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:22,082 DEBUG [PEWorker-16] procedure.DeleteTableProcedure(324): Table 'ns4:test-15390222622493' archived!
2018-10-08 18:11:22,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:22,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:22,543 DEBUG [PEWorker-16] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/ns4/test-15390222622493/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:22,546 INFO [RegionOpenAndInitThread-ns4:test-15390222622493-1] regionserver.HRegion(7043): creating HRegion ns4:test-15390222622493 HTD == 'ns4:test-15390222622493', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}, {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == ns4:test-15390222622493
2018-10-08 18:11:22,587 DEBUG [RegionOpenAndInitThread-ns4:test-15390222622493-1] regionserver.HRegion(836): Instantiated ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:22,588 DEBUG [RegionOpenAndInitThread-ns4:test-15390222622493-1] regionserver.HRegion(1554): Closing 597b3222c11323d82584b9711fb2a2c8, disabling compactions & flushes
2018-10-08 18:11:22,588 DEBUG [RegionOpenAndInitThread-ns4:test-15390222622493-1] regionserver.HRegion(1594): Updates disabled for region ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.
2018-10-08 18:11:22,588 INFO [RegionOpenAndInitThread-ns4:test-15390222622493-1] regionserver.HRegion(1711): Closed ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.
2018-10-08 18:11:22,673 DEBUG [PEWorker-16] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":1539022282673},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022282673}]},"ts":1539022282673}
2018-10-08 18:11:22,683 INFO [PEWorker-16] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:11:22,757 DEBUG [PEWorker-16] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"ns4:test-15390222622493","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022282756}]},"ts":1539022282756}
2018-10-08 18:11:22,763 INFO [PEWorker-16] hbase.MetaTableAccessor(1700): Updated tableName=ns4:test-15390222622493, state=ENABLING in hbase:meta
2018-10-08 18:11:22,848 INFO [PEWorker-16] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=ns4:test-15390222622493, region=597b3222c11323d82584b9711fb2a2c8, ASSIGN}]
2018-10-08 18:11:22,972 INFO [PEWorker-7] procedure.MasterProcedureScheduler(689): pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=ns4:test-15390222622493, region=597b3222c11323d82584b9711fb2a2c8, ASSIGN checking lock on 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:22,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:23,100 INFO [PEWorker-7] assignment.TransitRegionStateProcedure(160): Starting pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=ns4:test-15390222622493, region=597b3222c11323d82584b9711fb2a2c8, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:11:23,255 INFO [PEWorker-8] assignment.RegionStateStore(200): pid=23 updating hbase:meta row=597b3222c11323d82584b9711fb2a2c8, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:23,271 INFO [PEWorker-8] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:11:23,651 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(101): Open ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.
2018-10-08 18:11:23,652 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7217): Opening region: {ENCODED => 597b3222c11323d82584b9711fb2a2c8, NAME => 'ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:11:23,653 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:11:23,654 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table test-15390222622493 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,654 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(836): Instantiated ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:23,654 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7256): checking encryption for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,654 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7261): checking classloading for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,666 DEBUG [StoreOpener-597b3222c11323d82584b9711fb2a2c8-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8/f
2018-10-08 18:11:23,666 DEBUG [StoreOpener-597b3222c11323d82584b9711fb2a2c8-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8/f
2018-10-08 18:11:23,667 INFO [StoreOpener-597b3222c11323d82584b9711fb2a2c8-1] hfile.CacheConfig(239): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:23,668 INFO [StoreOpener-597b3222c11323d82584b9711fb2a2c8-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:23,670 INFO [StoreOpener-597b3222c11323d82584b9711fb2a2c8-1] regionserver.HStore(327): Store=f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:23,670 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(949): replaying wal for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,676 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,677 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,677 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(957): stopping wal replay for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,677 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(969): Cleaning up temporary data for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,679 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(980): Cleaning up detritus for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,682 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1005): writing seq id for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,688 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:11:23,688 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1009): Opened 597b3222c11323d82584b9711fb2a2c8; next sequenceid=2
2018-10-08 18:11:23,688 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1016): Running coprocessor post-open hooks for 597b3222c11323d82584b9711fb2a2c8
2018-10-08 18:11:23,691 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2198): Post open deploy tasks for ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.
2018-10-08 18:11:23,701 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=23 updating hbase:meta row=597b3222c11323d82584b9711fb2a2c8, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:23,707 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2222): Finished post open deploy task for ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.
2018-10-08 18:11:23,707 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(138): Opened ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8.
2018-10-08 18:11:23,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:24,062 INFO [PEWorker-10] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=ns4:test-15390222622493, region=597b3222c11323d82584b9711fb2a2c8, ASSIGN; resume parent processing.
2018-10-08 18:11:24,062 INFO [PEWorker-10] procedure2.ProcedureExecutor(1507): Finished pid=24, ppid=23, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 550msec
2018-10-08 18:11:24,404 INFO [PEWorker-11] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=22, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=ns4:test-15390222622493; resume parent processing.
2018-10-08 18:11:24,404 INFO [PEWorker-11] procedure2.ProcedureExecutor(1507): Finished pid=23, ppid=22, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=ns4:test-15390222622493, region=597b3222c11323d82584b9711fb2a2c8, ASSIGN in 1.2140sec
2018-10-08 18:11:24,547 DEBUG [PEWorker-12] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"ns4:test-15390222622493","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022284546}]},"ts":1539022284546}
2018-10-08 18:11:24,554 INFO [PEWorker-12] hbase.MetaTableAccessor(1700): Updated tableName=ns4:test-15390222622493, state=ENABLED in hbase:meta
2018-10-08 18:11:24,961 WARN [HBase-Metrics2-1] impl.MetricsConfig(134): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2018-10-08 18:11:24,976 INFO [PEWorker-12] procedure2.ProcedureExecutor(1507): Finished pid=22, state=SUCCESS, hasLock=false; CreateTableProcedure table=ns4:test-15390222622493 in 3.1670sec
2018-10-08 18:11:25,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=22
2018-10-08 18:11:25,988 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: CREATE, Table Name: ns4:test-15390222622493, procId: 22 completed
2018-10-08 18:11:25,989 DEBUG [Time-limited test] hbase.HBaseTestingUtility(3452): Waiting until all regions of table ns4:test-15390222622493 get assigned. Timeout = 60000ms
2018-10-08 18:11:25,989 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-10-08 18:11:26,001 INFO [Time-limited test] hbase.HBaseTestingUtility(3504): All regions for table ns4:test-15390222622493 assigned to meta. Checking AM states.
2018-10-08 18:11:26,001 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2018-10-08 18:11:26,002 INFO [Time-limited test] hbase.HBaseTestingUtility(3524): All regions for table ns4:test-15390222622493 assigned.
2018-10-08 18:11:26,002 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x67056e66 to localhost:54078
2018-10-08 18:11:26,003 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:11:26,008 INFO [Time-limited test] backup.TestIncrementalBackupWithBulkLoad(69): create full backup image for all tables
2018-10-08 18:11:26,010 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x61915206 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:26,137 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8a2558a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:26,169 INFO [RS-EventLoopGroup-3-14] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:57878, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:26,208 INFO [RS-EventLoopGroup-1-4] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:42616, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=MasterService
2018-10-08 18:11:26,223 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$16(3223): Client=hbase//172.18.128.12 creating {NAME => 'backup'}
2018-10-08 18:11:26,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=25, state=RUNNABLE:CREATE_NAMESPACE_PREPARE, hasLock=false; CreateNamespaceProcedure, namespace=backup
2018-10-08 18:11:26,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=25
2018-10-08 18:11:26,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=25
2018-10-08 18:11:26,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=25
2018-10-08 18:11:26,913 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace
2018-10-08 18:11:27,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=25
2018-10-08 18:11:27,298 INFO [PEWorker-1] procedure2.ProcedureExecutor(1507): Finished pid=25, state=SUCCESS, hasLock=false; CreateNamespaceProcedure, namespace=backup in 856msec
2018-10-08 18:11:27,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=25
2018-10-08 18:11:27,675 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$3(2004): Client=hbase//172.18.128.12 create 'backup:system', {NAME => 'meta', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'session', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:27,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=26, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=backup:system
2018-10-08 18:11:28,047 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(630): Client=hbase//172.18.128.12 procedure request for creating table: namespace: "backup" qualifier: "system" procId is: 26
2018-10-08 18:11:28,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:28,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:28,197 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(320): Archiving region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. from FS
2018-10-08 18:11:28,199 DEBUG [PEWorker-2] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:28,200 DEBUG [PEWorker-2] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/backup/system/29493d1f83444b313854401df15f30aa empty.
2018-10-08 18:11:28,201 DEBUG [PEWorker-2] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/backup/system/29493d1f83444b313854401df15f30aa
2018-10-08 18:11:28,201 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(324): Table 'backup:system' archived!
2018-10-08 18:11:28,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:28,644 DEBUG [PEWorker-2] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/backup/system/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:28,648 INFO [RegionOpenAndInitThread-backup:system-1] regionserver.HRegion(7043): creating HRegion backup:system HTD == 'backup:system', {NAME => 'meta', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'session', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == backup:system
2018-10-08 18:11:28,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:29,084 DEBUG [RegionOpenAndInitThread-backup:system-1] regionserver.HRegion(836): Instantiated backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:29,092 DEBUG [RegionOpenAndInitThread-backup:system-1] regionserver.HRegion(1554): Closing 29493d1f83444b313854401df15f30aa, disabling compactions & flushes
2018-10-08 18:11:29,093 DEBUG [RegionOpenAndInitThread-backup:system-1] regionserver.HRegion(1594): Updates disabled for region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:29,093 INFO [RegionOpenAndInitThread-backup:system-1] regionserver.HRegion(1711): Closed backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:29,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:29,226 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":1539022289225},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022289225}]},"ts":1539022289225}
2018-10-08 18:11:29,231 INFO [PEWorker-2] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:11:29,390 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022289390}]},"ts":1539022289390}
2018-10-08 18:11:29,395 INFO [PEWorker-2] hbase.MetaTableAccessor(1700): Updated tableName=backup:system, state=ENABLING in hbase:meta
2018-10-08 18:11:29,526 INFO [PEWorker-2] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=27, ppid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN}]
2018-10-08 18:11:29,668 INFO [PEWorker-13] procedure.MasterProcedureScheduler(689): pid=27, ppid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN checking lock on 29493d1f83444b313854401df15f30aa
2018-10-08 18:11:29,838 INFO [PEWorker-13] assignment.TransitRegionStateProcedure(160): Starting pid=27, ppid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:11:29,992 INFO [PEWorker-3] assignment.RegionStateStore(200): pid=27 updating hbase:meta row=29493d1f83444b313854401df15f30aa, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:29,998 INFO [PEWorker-3] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=28, ppid=27, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:11:30,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:30,411 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] handler.AssignRegionHandler(101): Open backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:30,411 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(7217): Opening region: {ENCODED => 29493d1f83444b313854401df15f30aa, NAME => 'backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:11:30,412 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:11:30,413 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table system 29493d1f83444b313854401df15f30aa
2018-10-08 18:11:30,413 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(836): Instantiated backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:30,413 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(7256): checking encryption for 29493d1f83444b313854401df15f30aa
2018-10-08 18:11:30,413 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(7261): checking classloading for 29493d1f83444b313854401df15f30aa
2018-10-08 18:11:30,423 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta
2018-10-08 18:11:30,423 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta
2018-10-08 18:11:30,426 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] hfile.CacheConfig(239): Created cacheConfig for meta: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:30,426 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:30,428 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(327): Store=meta, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:30,434 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session
2018-10-08 18:11:30,434 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session
2018-10-08 18:11:30,435 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] hfile.CacheConfig(239): Created cacheConfig for session: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:30,435 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:30,437 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(327): Store=session, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:30,437 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(949): replaying wal for 29493d1f83444b313854401df15f30aa
2018-10-08 18:11:30,442 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa
2018-10-08 18:11:30,443 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(4611): Found 0 recovered edits file(s) under
hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system/29493d1f83444b313854401df15f30aa 2018-10-08 18:11:30,443 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(957): stopping wal replay for 29493d1f83444b313854401df15f30aa 2018-10-08 18:11:30,443 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(969): Cleaning up temporary data for 29493d1f83444b313854401df15f30aa 2018-10-08 18:11:30,444 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(980): Cleaning up detritus for 29493d1f83444b313854401df15f30aa 2018-10-08 18:11:30,446 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table backup:system descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0M)) instead. 2018-10-08 18:11:30,449 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(1005): writing seq id for 29493d1f83444b313854401df15f30aa 2018-10-08 18:11:30,456 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system/29493d1f83444b313854401df15f30aa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2018-10-08 18:11:30,456 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(1009): Opened 29493d1f83444b313854401df15f30aa; next sequenceid=2 2018-10-08 18:11:30,456 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegion(1016): Running coprocessor post-open hooks for 29493d1f83444b313854401df15f30aa 2018-10-08 18:11:30,458 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegionServer(2198): Post open deploy tasks for backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. 
2018-10-08 18:11:30,469 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=27 updating hbase:meta row=29493d1f83444b313854401df15f30aa, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:30,474 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-1] regionserver.HRegionServer(2222): Finished post open deploy task for backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:30,474 INFO [RS_OPEN_REGION-regionserver/cn012:0-1] handler.AssignRegionHandler(138): Opened backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:31,080 INFO [PEWorker-14] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=27, ppid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN; resume parent processing.
2018-10-08 18:11:31,080 INFO [PEWorker-14] procedure2.ProcedureExecutor(1507): Finished pid=28, ppid=27, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 765msec
2018-10-08 18:11:31,406 INFO [PEWorker-14] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=26, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=backup:system; resume parent processing.
2018-10-08 18:11:31,407 INFO [PEWorker-14] procedure2.ProcedureExecutor(1507): Finished pid=27, ppid=26, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN in 1.5550sec
2018-10-08 18:11:31,540 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022291540}]},"ts":1539022291540}
2018-10-08 18:11:31,544 INFO [PEWorker-4] hbase.MetaTableAccessor(1700): Updated tableName=backup:system, state=ENABLED in hbase:meta
2018-10-08 18:11:31,988 INFO [PEWorker-4] procedure2.ProcedureExecutor(1507): Finished pid=26, state=SUCCESS, hasLock=false; CreateTableProcedure table=backup:system in 4.0570sec
2018-10-08 18:11:32,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=26
2018-10-08 18:11:32,172 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: CREATE, Table Name: backup:system, procId: 26 completed
2018-10-08 18:11:32,204 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$3(2004): Client=hbase//172.18.128.12 create 'backup:system_bulk', {NAME => 'meta', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'session', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:32,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=29, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure table=backup:system_bulk
2018-10-08 18:11:32,515 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(630): Client=hbase//172.18.128.12 procedure request for creating table: namespace: "backup" qualifier: "system_bulk" procId is: 29
2018-10-08 18:11:32,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:32,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:32,637 DEBUG [PEWorker-6] procedure.DeleteTableProcedure(320): Archiving region backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a. from FS
2018-10-08 18:11:32,641 DEBUG [PEWorker-6] backup.HFileArchiver(112): ARCHIVING hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:32,642 DEBUG [PEWorker-6] backup.HFileArchiver(146): Directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/backup/system_bulk/94bac9ca44593231733270505a40a07a empty.
2018-10-08 18:11:32,644 DEBUG [PEWorker-6] backup.HFileArchiver(461): Failed to delete directory hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/backup/system_bulk/94bac9ca44593231733270505a40a07a
2018-10-08 18:11:32,644 DEBUG [PEWorker-6] procedure.DeleteTableProcedure(324): Table 'backup:system_bulk' archived!
2018-10-08 18:11:32,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:33,100 DEBUG [PEWorker-6] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp/data/backup/system_bulk/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:33,104 INFO [RegionOpenAndInitThread-backup:system_bulk-1] regionserver.HRegion(7043): creating HRegion backup:system_bulk HTD == 'backup:system_bulk', {NAME => 'meta', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}, {NAME => 'session', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} RootDir = hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.tmp Table name == backup:system_bulk
2018-10-08 18:11:33,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:33,537 DEBUG [RegionOpenAndInitThread-backup:system_bulk-1] regionserver.HRegion(836): Instantiated backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:33,540 DEBUG [RegionOpenAndInitThread-backup:system_bulk-1] regionserver.HRegion(1554): Closing 94bac9ca44593231733270505a40a07a, disabling compactions & flushes
2018-10-08 18:11:33,540 DEBUG [RegionOpenAndInitThread-backup:system_bulk-1] regionserver.HRegion(1594): Updates disabled for region backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.
2018-10-08 18:11:33,540 INFO [RegionOpenAndInitThread-backup:system_bulk-1] regionserver.HRegion(1711): Closed backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.
2018-10-08 18:11:33,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:33,647 DEBUG [PEWorker-6] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":1539022293647},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022293647}]},"ts":1539022293647}
2018-10-08 18:11:33,652 INFO [PEWorker-6] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:11:33,783 DEBUG [PEWorker-6] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system_bulk","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022293783}]},"ts":1539022293783}
2018-10-08 18:11:33,788 INFO [PEWorker-6] hbase.MetaTableAccessor(1700): Updated tableName=backup:system_bulk, state=ENABLING in hbase:meta
2018-10-08 18:11:33,869 INFO [PEWorker-6] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=30, ppid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=backup:system_bulk, region=94bac9ca44593231733270505a40a07a, ASSIGN}]
2018-10-08 18:11:33,996 INFO [PEWorker-16] procedure.MasterProcedureScheduler(689): pid=30, ppid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=backup:system_bulk, region=94bac9ca44593231733270505a40a07a, ASSIGN checking lock on 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,029 INFO [PEWorker-16] assignment.TransitRegionStateProcedure(160): Starting pid=30, ppid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=backup:system_bulk, region=94bac9ca44593231733270505a40a07a, ASSIGN; rit=OFFLINE, location=cn012.l42scl.hortonworks.com,37486,1539022239614; forceNewPlan=false, retain=false
2018-10-08 18:11:34,183 INFO [PEWorker-7] assignment.RegionStateStore(200): pid=30 updating hbase:meta row=94bac9ca44593231733270505a40a07a, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:34,189 INFO [PEWorker-7] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=31, ppid=30, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:11:34,539 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] handler.AssignRegionHandler(101): Open backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.
2018-10-08 18:11:34,539 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(7217): Opening region: {ENCODED => 94bac9ca44593231733270505a40a07a, NAME => 'backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:11:34,541 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:11:34,541 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table system_bulk 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,541 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(836): Instantiated backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:11:34,542 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(7256): checking encryption for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,542 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(7261): checking classloading for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,548 DEBUG [StoreOpener-94bac9ca44593231733270505a40a07a-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/meta
2018-10-08 18:11:34,548 DEBUG [StoreOpener-94bac9ca44593231733270505a40a07a-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/meta
2018-10-08 18:11:34,549 INFO [StoreOpener-94bac9ca44593231733270505a40a07a-1] hfile.CacheConfig(239): Created cacheConfig for meta: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:34,550 INFO [StoreOpener-94bac9ca44593231733270505a40a07a-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:34,551 INFO [StoreOpener-94bac9ca44593231733270505a40a07a-1] regionserver.HStore(327): Store=meta, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:34,553 DEBUG [StoreOpener-94bac9ca44593231733270505a40a07a-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/session
2018-10-08 18:11:34,553 DEBUG [StoreOpener-94bac9ca44593231733270505a40a07a-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/session
2018-10-08 18:11:34,554 INFO [StoreOpener-94bac9ca44593231733270505a40a07a-1] hfile.CacheConfig(239): Created cacheConfig for session: blockCache=LruBlockCache{blockCount=0, currentSize=747.70 KB, freeSize=994.87 MB, maxSize=995.60 MB, heapSize=747.70 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:34,554 INFO [StoreOpener-94bac9ca44593231733270505a40a07a-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:11:34,555 INFO [StoreOpener-94bac9ca44593231733270505a40a07a-1] regionserver.HStore(327): Store=session, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:11:34,555 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(949): replaying wal for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,559 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,560 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system_bulk/94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,560 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(957): stopping wal replay for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,560 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(969): Cleaning up temporary data for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,561 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(980): Cleaning up detritus for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,563 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table backup:system_bulk descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0M)) instead.
2018-10-08 18:11:34,563 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(1005): writing seq id for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,569 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system_bulk/94bac9ca44593231733270505a40a07a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2018-10-08 18:11:34,569 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(1009): Opened 94bac9ca44593231733270505a40a07a; next sequenceid=2
2018-10-08 18:11:34,569 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegion(1016): Running coprocessor post-open hooks for 94bac9ca44593231733270505a40a07a
2018-10-08 18:11:34,571 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegionServer(2198): Post open deploy tasks for backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.
2018-10-08 18:11:34,578 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=30 updating hbase:meta row=94bac9ca44593231733270505a40a07a, regionState=OPEN, openSeqNum=2, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:34,582 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-2] regionserver.HRegionServer(2222): Finished post open deploy task for backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.
2018-10-08 18:11:34,582 INFO [RS_OPEN_REGION-regionserver/cn012:0-2] handler.AssignRegionHandler(138): Opened backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a.
2018-10-08 18:11:34,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:35,289 INFO [PEWorker-8] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=30, ppid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=backup:system_bulk, region=94bac9ca44593231733270505a40a07a, ASSIGN; resume parent processing.
2018-10-08 18:11:35,289 INFO [PEWorker-8] procedure2.ProcedureExecutor(1507): Finished pid=31, ppid=30, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 614msec
2018-10-08 18:11:35,595 INFO [PEWorker-10] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=29, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, hasLock=false; CreateTableProcedure table=backup:system_bulk; resume parent processing.
2018-10-08 18:11:35,596 INFO [PEWorker-10] procedure2.ProcedureExecutor(1507): Finished pid=30, ppid=29, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=backup:system_bulk, region=94bac9ca44593231733270505a40a07a, ASSIGN in 1.4200sec
2018-10-08 18:11:35,766 DEBUG [PEWorker-11] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system_bulk","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022295766}]},"ts":1539022295766}
2018-10-08 18:11:35,771 INFO [PEWorker-11] hbase.MetaTableAccessor(1700): Updated tableName=backup:system_bulk, state=ENABLED in hbase:meta
2018-10-08 18:11:35,921 WARN [HBase-Metrics2-1] impl.MetricsConfig(134): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2018-10-08 18:11:36,189 INFO [PEWorker-11] procedure2.ProcedureExecutor(1507): Finished pid=29, state=SUCCESS, hasLock=false; CreateTableProcedure table=backup:system_bulk in 3.6950sec
2018-10-08 18:11:36,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=29
2018-10-08 18:11:36,643 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: CREATE, Table Name: backup:system_bulk, procId: 29 completed
2018-10-08 18:11:36,657 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system should be available
2018-10-08 18:11:36,657 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system exists and available
2018-10-08 18:11:36,663 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system_bulk should be available
2018-10-08 18:11:36,663 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available
2018-10-08 18:11:36,666 DEBUG [Time-limited test] impl.BackupSystemTable(587): Start new backup exclusive operation
2018-10-08 18:11:36,725 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1492): Client=hbase//172.18.128.12 snapshot request for:{ ss=snapshot_backup_system table=backup:system type=FLUSH }
2018-10-08 18:11:36,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotDescriptionUtils(313): Creation time not specified, setting to:1539022296725 (current time:1539022296725).
2018-10-08 18:11:36,726 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(139): Connect 0x791d5bc9 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:36,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16785c41, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:36,808 INFO [RS-EventLoopGroup-3-17] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:57982, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:36,811 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x791d5bc9 to localhost:54078
2018-10-08 18:11:36,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:11:36,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(565): No existing snapshot, attempting snapshot...
2018-10-08 18:11:36,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(613): Table enabled, starting distributed snapshot.
2018-10-08 18:11:37,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=32, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:11:37,010 DEBUG [PEWorker-12] locking.LockProcedure(309): LOCKED pid=32, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:11:37,099 INFO [PEWorker-12] procedure2.TimeoutExecutorThread(82): ADDED pid=32, state=WAITING_TIMEOUT, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE; timeout=600000, timestamp=1539022897098
2018-10-08 18:11:37,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(615): Started snapshot: { ss=snapshot_backup_system table=backup:system type=FLUSH }
2018-10-08 18:11:37,103 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(175): Running FLUSH table snapshot snapshot_backup_system C_M_SNAPSHOT_TABLE on table backup:system
2018-10-08 18:11:37,114 DEBUG [Time-limited test] client.HBaseAdmin(2585): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_backup_system table=backup:system type=FLUSH }'' to complete. (max 6666 ms per retry)
2018-10-08 18:11:37,114 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#1) Sleeping: 100ms while waiting for snapshot completion.
2018-10-08 18:11:37,215 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:37,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:37,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:11:37,222 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#2) Sleeping: 200ms while waiting for snapshot completion.
2018-10-08 18:11:37,423 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:37,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:37,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:11:37,426 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#3) Sleeping: 300ms while waiting for snapshot completion.
2018-10-08 18:11:37,547 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_backup_system
2018-10-08 18:11:37,547 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_backup_system'
2018-10-08 18:11:37,547 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2018-10-08 18:11:37,548 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(204): Procedure 'snapshot_backup_system' starting 'acquire'
2018-10-08 18:11:37,548 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_backup_system', kicking off acquire phase on members.
2018-10-08 18:11:37,549 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:37,549 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(95): Creating acquire znode:/1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:37,558 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2018-10-08 18:11:37,558 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(103): Watching for acquire node:/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:37,558 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/online-snapshot/acquired
2018-10-08 18:11:37,558 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2018-10-08 18:11:37,559 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:37,559 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2018-10-08 18:11:37,559 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(187): Found procedure znode: /1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:37,560 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:37,560 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(213): start proc data length is 54
2018-10-08 18:11:37,560 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(215): Found data for znode:/1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:37,561 DEBUG [Time-limited test-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_backup_system from table backup:system type FLUSH
2018-10-08 18:11:37,564 DEBUG [Time-limited test-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_backup_system
2018-10-08 18:11:37,564 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(159): Starting subprocedure 'snapshot_backup_system' with timeout 300000ms
2018-10-08 18:11:37,565 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2018-10-08 18:11:37,568 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_backup_system' starting 'acquire' stage
2018-10-08 18:11:37,568 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(169): Subprocedure 'snapshot_backup_system' locally acquired
2018-10-08 18:11:37,568 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(244): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure (snapshot_backup_system) in zk
2018-10-08 18:11:37,575 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:37,575 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(252): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:37,575 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:37,576 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:37,576 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:37,577 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:37,577 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(174): Subprocedure 'snapshot_backup_system' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2018-10-08 18:11:37,577 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:37,578 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:37,578 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:37,579 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:37,579 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:37,580 DEBUG [Time-limited test-EventThread] procedure.Procedure(298): member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure 'snapshot_backup_system' on coordinator
2018-10-08 18:11:37,581 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(212): Procedure 'snapshot_backup_system' starting 'in-barrier' execution.
2018-10-08 18:11:37,581 DEBUG [Time-limited test-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@1f11ec0f[Count = 0] remaining members to acquire global barrier
2018-10-08 18:11:37,581 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(119): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:37,591 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:37,591 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:37,591 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(129): Received reached global barrier:/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:37,591 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_backup_system' received 'reached' from coordinator.
2018-10-08 18:11:37,592 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:37,592 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2018-10-08 18:11:37,593 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.FlushSnapshotSubprocedure(171): Flush Snapshot Tasks submitted for 1 regions
2018-10-08 18:11:37,593 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(317): Waiting for local region snapshots to finish.
2018-10-08 18:11:37,595 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(98): Starting snapshot operation on backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:37,596 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(111): Flush Snapshotting region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. started...
2018-10-08 18:11:37,599 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.HRegion(2647): Flushing 2/2 column families, dataSize=45 B heapSize=632 B
2018-10-08 18:11:37,726 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:37,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:37,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:11:37,730 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#4) Sleeping: 500ms while waiting for snapshot completion.
2018-10-08 18:11:38,100 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=45 B at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/71375c40605f4c24904246837fdc4949
2018-10-08 18:11:38,166 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/71375c40605f4c24904246837fdc4949 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949
2018-10-08 18:11:38,178 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949, entries=1, sequenceid=5, filesize=4.8 K
2018-10-08 18:11:38,191 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.HRegion(2856): Finished flush of dataSize ~45 B/45, heapSize ~360 B/360, currentSize=0 B/0 for 29493d1f83444b313854401df15f30aa in 592ms, sequenceid=5, compaction requested=false
2018-10-08 18:11:38,193 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.MetricsTableSourceImpl(124): Creating new MetricsTableSourceImpl for table 'backup:system'
2018-10-08 18:11:38,196 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] regionserver.HRegion(2362): Flush status journal: Acquiring readlock on region at 1539022297597 Running coprocessor pre-flush hooks at 1539022297598 Obtaining lock to block concurrent updates at 1539022297599 Preparing flush snapshotting stores in 29493d1f83444b313854401df15f30aa at 1539022297599 Finished memstore snapshotting backup:system,,1539022287674.29493d1f83444b313854401df15f30aa., syncing WAL and waiting on mvcc, flushsize=dataSize=45, getHeapSize=600, getOffHeapSize=0 at 1539022297614 Flushing stores of backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. at 1539022297616 Flushing session: creating writer at 1539022297620 Flushing session: appending metadata at 1539022297661 Flushing session: closing flushed file at 1539022297661 Flushing session: reopening flushed file at 1539022298168 Finished flush of dataSize ~45 B/45, heapSize ~360 B/360, currentSize=0 B/0 for 29493d1f83444b313854401df15f30aa in 592ms, sequenceid=5, compaction requested=false at 1539022298191 Running post-flush coprocessor hooks at 1539022298196 Flush successful at 1539022298196
2018-10-08 18:11:38,197 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.SnapshotManifest(235): Storing 'backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.' region-info for snapshot.
2018-10-08 18:11:38,204 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.SnapshotManifest(240): Creating references for hfiles
2018-10-08 18:11:38,208 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.SnapshotManifest(250): Adding snapshot references for [] hfiles
2018-10-08 18:11:38,208 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.SnapshotManifest(250): Adding snapshot references for [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949] hfiles
2018-10-08 18:11:38,208 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.SnapshotManifest(259): Adding reference for file (1/1): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949
2018-10-08 18:11:38,230 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:38,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:38,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:11:38,244 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#5) Sleeping: 1000ms while waiting for snapshot completion.
2018-10-08 18:11:38,647 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(138): ... Flush Snapshotting region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. completed.
2018-10-08 18:11:38,647 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool7-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(141): Closing snapshot operation on backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:11:38,648 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(328): Completed 1/1 local region snapshots.
2018-10-08 18:11:38,649 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(330): Completed 1 local region snapshots.
2018-10-08 18:11:38,649 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(362): cancelling 0 tasks for snapshot cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,649 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(190): Subprocedure 'snapshot_backup_system' locally completed
2018-10-08 18:11:38,649 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(268): Marking procedure 'snapshot_backup_system' completed for member 'cn012.l42scl.hortonworks.com,37486,1539022239614' in zk
2018-10-08 18:11:38,669 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,669 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(195): Subprocedure 'snapshot_backup_system' has notified controller of completion
2018-10-08 18:11:38,669 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,670 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:38,670 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:38,669 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:38,670 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(220): Subprocedure 'snapshot_backup_system' completed.
2018-10-08 18:11:38,672 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:38,673 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:38,674 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:38,674 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,675 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:38,675 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:38,676 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,677 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(223): Finished data from procedure 'snapshot_backup_system' member 'cn012.l42scl.hortonworks.com,37486,1539022239614':
2018-10-08 18:11:38,677 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(221): Procedure 'snapshot_backup_system' execution completed
2018-10-08 18:11:38,678 DEBUG [Time-limited test-EventThread] procedure.Procedure(329): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' released barrier for procedure'snapshot_backup_system', counting down latch. Waiting for 0 more
2018-10-08 18:11:38,678 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(230): Running finish phase.
2018-10-08 18:11:38,678 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2018-10-08 18:11:38,678 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(166): Attempting to clean out zk node for op:snapshot_backup_system
2018-10-08 18:11:38,678 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureUtil(286): Clearing all znodes for procedure snapshot_backup_systemincluding nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2018-10-08 18:11:38,727 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,727 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,727 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,728 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,728 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:38,728 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,728 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:38,729 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2018-10-08 18:11:38,729 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:38,729 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,729 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/online-snapshot/abort
2018-10-08 18:11:38,729 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:11:38,730 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:38,730 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,730 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:38,731 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:38,731 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,731 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,732 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:38,733 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:38,733 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,791 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:38,792 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/online-snapshot/acquired
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:38,792 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:38,792 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for snapshot_backup_system
2018-10-08 18:11:38,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:38,795 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2018-10-08 18:11:38,795 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/online-snapshot/abort
2018-10-08 18:11:38,795 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:11:38,795 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifest(478): Convert to Single Snapshot Manifest
2018-10-08 18:11:38,802 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifestV1(128): No regions under directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system
2018-10-08 18:11:39,245 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:39,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:39,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:11:39,247 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#6) Sleeping: 2000ms while waiting for snapshot completion.
2018-10-08 18:11:39,264 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(265): Sentinel is done, just moving the snapshot from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/snapshot_backup_system
2018-10-08 18:11:40,131 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(222): Snapshot snapshot_backup_system of table backup:system completed
2018-10-08 18:11:40,131 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(235): Launching cleanup of working dir:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system
2018-10-08 18:11:40,132 ERROR [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(240): Couldn't delete snapshot working directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system
2018-10-08 18:11:40,136 DEBUG [PEWorker-1] locking.LockProcedure(240): UNLOCKED pid=32, state=RUNNABLE, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:11:40,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1507): Finished pid=32, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE in 3.3040sec
2018-10-08 18:11:41,248 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:41,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:41,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(385): Snapshot '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' has completed, notifying client.
2018-10-08 18:11:41,252 INFO [Time-limited test] impl.TableBackupClient(120): Backup backup_1539022286146 started at 1539022301251.
2018-10-08 18:11:41,274 DEBUG [Time-limited test] impl.TableBackupClient(124): Backup session backup_1539022286146 has been started.
2018-10-08 18:11:41,279 INFO [Time-limited test] impl.FullTableBackupClient(144): Execute roll log procedure for full backup ...
2018-10-08 18:11:41,295 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(880): Client=hbase//172.18.128.12 procedure request for: rolllog-proc 2018-10-08 18:11:41,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure.ProcedureCoordinator(177): Submitting procedure rolllog 2018-10-08 18:11:41,297 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(196): Starting procedure 'rolllog' 2018-10-08 18:11:41,298 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 180000 ms 2018-10-08 18:11:41,299 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire' 2018-10-08 18:11:41,299 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members. 
2018-10-08 18:11:41,300 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,300 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(95): Creating acquire znode:/1/rolllog-proc/acquired/rolllog 2018-10-08 18:11:41,308 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2018-10-08 18:11:41,308 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(103): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,308 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/rolllog-proc/acquired 2018-10-08 18:11:41,308 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2018-10-08 18:11:41,308 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,308 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2018-10-08 18:11:41,309 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(187): Found procedure znode: 
/1/rolllog-proc/acquired/rolllog 2018-10-08 18:11:41,309 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,310 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(213): start proc data length is 35 2018-10-08 18:11:41,310 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(215): Found data for znode:/1/rolllog-proc/acquired/rolllog 2018-10-08 18:11:41,310 INFO [Time-limited test-EventThread] regionserver.LogRollRegionServerProcedureManager(128): Attempting to run a roll log procedure for backup. 2018-10-08 18:11:41,312 INFO [Time-limited test-EventThread] regionserver.LogRollBackupSubprocedure(57): Constructing a LogRollBackupSubprocedure. 2018-10-08 18:11:41,312 DEBUG [Time-limited test-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog 2018-10-08 18:11:41,312 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(159): Starting subprocedure 'rolllog' with timeout 60000ms 2018-10-08 18:11:41,313 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms 2018-10-08 18:11:41,314 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' starting 'acquire' stage 2018-10-08 18:11:41,315 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(169): Subprocedure 'rolllog' locally acquired 2018-10-08 18:11:41,315 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(244): Member: 
'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure (rolllog) in zk 2018-10-08 18:11:41,324 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,324 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(252): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,324 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,325 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system: 2018-10-08 18:11:41,325 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/rolllog-proc 2018-10-08 18:11:41,325 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,325 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(174): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2018-10-08 18:11:41,325 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort 2018-10-08 18:11:41,326 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired 2018-10-08 18:11:41,326 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog 2018-10-08 18:11:41,326 DEBUG [Time-limited test-EventThread] 
procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,327 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached 2018-10-08 18:11:41,327 DEBUG [Time-limited test-EventThread] procedure.Procedure(298): member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure 'rolllog' on coordinator 2018-10-08 18:11:41,327 DEBUG [Time-limited test-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@6576caa5[Count = 0] remaining members to acquire global barrier 2018-10-08 18:11:41,327 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution. 2018-10-08 18:11:41,327 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(119): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,333 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,333 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,333 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(129): Received reached global barrier:/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,333 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' received 'reached' from coordinator. 
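The exchange logged above is a two-phase barrier run over znodes: each region server creates a child under `acquired/rolllog`, the coordinator counts members down and creates `reached/rolllog` (the global barrier), and each member then runs its task and reports completion under `reached/rolllog`. A toy in-memory model of that handshake, assuming a single process with threads standing in for region servers and an `Event` standing in for the `reached` znode (class and method names are hypothetical, not the HBase procedure API):

```python
import threading

class ProcedureBarrier:
    """Toy model of the acquire -> reached -> done handshake from the log."""

    def __init__(self, members):
        self.members = frozenset(members)
        self._acquired = set()
        self.done = set()
        self._reached = threading.Event()   # stands in for reached/<proc>
        self._lock = threading.Lock()

    def run_member(self, name, task):
        with self._lock:                    # "joining acquired barrier ... in zk"
            self._acquired.add(name)
            if self._acquired == self.members:
                self._reached.set()         # coordinator creates reached/<proc>
        self._reached.wait()                # "waiting on 'reached' ... from coordinator"
        task()                              # e.g. roll the WAL
        with self._lock:
            self.done.add(name)             # "Marking procedure ... completed ... in zk"
```

In the real system the coordinator and members communicate only through znode creations and watches, which is why every step above is mirrored by a `NodeCreated`/`NodeChildrenChanged` event in the log.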
2018-10-08 18:11:41,333 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,333 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2018-10-08 18:11:41,334 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] regionserver.LogRollBackupSubprocedurePool(86): Waiting for backup procedure to finish. 2018-10-08 18:11:41,335 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(76): DRPC started: cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,347 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(93): Trying to roll log in backup subprocedure, current log number: 1539022249231 highest: 1539022249231 on cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,347 DEBUG [regionserver/cn012:0.logRoller] regionserver.LogRoller(178): WAL roll requested 2018-10-08 18:11:41,358 DEBUG [RS-EventLoopGroup-3-18] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK] 2018-10-08 18:11:41,370 INFO [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(680): Rolled WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022246561.meta with entries=36, filesize=11.16 KB; new 
WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022301347.meta 2018-10-08 18:11:41,371 DEBUG [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(773): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]] 2018-10-08 18:11:41,384 DEBUG [RS-EventLoopGroup-3-20] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK] 2018-10-08 18:11:41,393 INFO [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(680): Rolled WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022249231 with entries=19, filesize=3.92 KB; new WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371 2018-10-08 18:11:41,393 DEBUG [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(773): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]] 2018-10-08 18:11:41,400 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022249231 not finished, retry = 0 2018-10-08 18:11:41,408 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(100): log roll took 61 2018-10-08 18:11:41,408 INFO 
[rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(101): After roll log in backup subprocedure, current log number: 1539022301371 on cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,419 INFO [RS-EventLoopGroup-1-5] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:42790, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase.hfs.0 (auth:SIMPLE), service=MasterService 2018-10-08 18:11:41,438 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] client.ConnectionImplementation(672): Table backup:system should be available 2018-10-08 18:11:41,440 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] impl.BackupSystemTable(244): Backup table backup:system exists and available 2018-10-08 18:11:41,444 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] client.ConnectionImplementation(672): Table backup:system_bulk should be available 2018-10-08 18:11:41,444 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available 2018-10-08 18:11:41,452 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(190): Subprocedure 'rolllog' locally completed 2018-10-08 18:11:41,452 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(268): Marking procedure 'rolllog' completed for member 'cn012.l42scl.hortonworks.com,37486,1539022239614' in zk 2018-10-08 18:11:41,459 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 
18:11:41,459 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(195): Subprocedure 'rolllog' has notified controller of completion 2018-10-08 18:11:41,459 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2018-10-08 18:11:41,459 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,459 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system: 2018-10-08 18:11:41,460 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/rolllog-proc 2018-10-08 18:11:41,459 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(220): Subprocedure 'rolllog' completed. 
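The WAL file names rolled in the entries above encode the owning server name (host, port, start code, with commas URL-encoded as `%2C` so the name stays one path component) plus the roll timestamp, which is exactly the "current log number" the subprocedure reports; meta WALs carry an extra `.meta` marker. A small parser sketched from the paths visible in this log (the format inference and helper name are mine, not an HBase utility):

```python
from urllib.parse import unquote

def parse_wal_name(path):
    """Split an HBase WAL file name, e.g.
    host%2Cport%2Cstartcode.<ts> or host%2Cport%2Cstartcode.meta.<ts>.meta,
    into server-name parts and the roll timestamp."""
    name = path.rsplit("/", 1)[-1]
    is_meta = name.endswith(".meta")
    if is_meta:
        name = name[: -len(".meta")]
    server_enc, ts = name.rsplit(".", 1)
    if server_enc.endswith(".meta"):        # meta WALs carry an inner marker too
        server_enc = server_enc[: -len(".meta")]
    host, port, startcode = unquote(server_enc).split(",")
    return {"host": host, "port": int(port), "startcode": int(startcode),
            "timestamp": int(ts), "meta": is_meta}
```

Applied to the rolled WAL above, this recovers server `cn012.l42scl.hortonworks.com,37486,1539022239614` and roll timestamp `1539022301371`.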
2018-10-08 18:11:41,461 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort 2018-10-08 18:11:41,462 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired 2018-10-08 18:11:41,462 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog 2018-10-08 18:11:41,463 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,463 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached 2018-10-08 18:11:41,464 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog 2018-10-08 18:11:41,464 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,465 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(223): Finished data from procedure 'rolllog' member 'cn012.l42scl.hortonworks.com,37486,1539022239614': 2018-10-08 18:11:41,465 DEBUG [Time-limited test-EventThread] procedure.Procedure(329): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' released barrier for procedure'rolllog', counting down latch. Waiting for 0 more 2018-10-08 18:11:41,465 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed 2018-10-08 18:11:41,465 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(230): Running finish phase. 
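The `|-` tree dumps above are `ZKProcedureUtil` logging the live procedure znodes, one line per node, with three extra dashes per depth level. A short sketch that reproduces that rendering from a nested dict (a hypothetical helper, not the HBase code, which logs each line separately):

```python
def dump_znodes(tree, root="/1/rolllog-proc"):
    """Render a znode tree with the |- / |---- / |------- prefixes
    seen in the ZKProcedureUtil log output."""
    lines = [f"|-{root}"]
    def walk(node, depth):
        for name, children in sorted(node.items()):
            lines.append("|-" + "-" * (3 * depth) + name)
            walk(children, depth + 1)
    walk(tree, 0)
    return lines
```

Feeding it the state visible mid-procedure reproduces the dump: `abort` and `acquired`/`reached` at depth zero, `rolllog` one level down, and the member znode below that.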
2018-10-08 18:11:41,465 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures 2018-10-08 18:11:41,465 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(166): Attempting to clean out zk node for op:rolllog 2018-10-08 18:11:41,465 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureUtil(286): Clearing all znodes for procedure rolllogincluding nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort 2018-10-08 18:11:41,474 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,474 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,475 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,475 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system: 2018-10-08 18:11:41,475 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/rolllog-proc 2018-10-08 18:11:41,475 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,475 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,475 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): 
|-abort 2018-10-08 18:11:41,475 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2018-10-08 18:11:41,476 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,476 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/rolllog-proc/abort 2018-10-08 18:11:41,476 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2018-10-08 18:11:41,476 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog 2018-10-08 18:11:41,476 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,476 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired 2018-10-08 18:11:41,477 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog 2018-10-08 18:11:41,477 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,477 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,478 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached 2018-10-08 
18:11:41,478 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog 2018-10-08 18:11:41,478 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,491 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired 2018-10-08 18:11:41,491 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/rolllog-proc/acquired 2018-10-08 18:11:41,491 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/rolllog-proc/acquired' 2018-10-08 18:11:41,491 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 
2018-10-08 18:11:41,492 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.LogRollMasterProcedureManager(146): Done waiting - exec procedure for rolllog 2018-10-08 18:11:41,492 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(614): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error) 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort 2018-10-08 18:11:41,494 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/rolllog-proc/abort 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/rolllog-proc/abort' 2018-10-08 18:11:41,493 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.LogRollMasterProcedureManager(147): Distributed roll log procedure is successful! 
2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog 2018-10-08 18:11:41,494 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog 2018-10-08 18:11:41,497 DEBUG [Time-limited test] 
client.HBaseAdmin(2859): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog'' to complete. (max 6666 ms per retry) 2018-10-08 18:11:41,497 DEBUG [Time-limited test] client.HBaseAdmin(2868): (#1) Sleeping: 100ms while waiting for procedure completion. 2018-10-08 18:11:41,598 DEBUG [Time-limited test] client.HBaseAdmin(2874): Getting current status of procedure from master... 2018-10-08 18:11:41,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1132): Checking to see if procedure from request:rolllog-proc is done 2018-10-08 18:11:41,628 DEBUG [Time-limited test] impl.BackupSystemTable(1021): add :hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022249231 2018-10-08 18:11:41,694 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1492): Client=hbase//172.18.128.12 snapshot request for:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } 2018-10-08 18:11:41,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotDescriptionUtils(313): Creation time not specified, setting to:1539022301694 (current time:1539022301694). 
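The `BackupSystemTable(1021): add :<WAL path>` entry above records the just-rolled WAL in the `backup:system` table, so a later incremental backup can tell which log files are already covered and which still need copying. A minimal sketch of that bookkeeping, with an in-memory dict standing in for the system table (class and method names are illustrative, not the `BackupSystemTable` API):

```python
class WalBookkeeper:
    """Track, per region server, WAL files already covered by a backup."""

    def __init__(self):
        self.covered = {}                  # server name -> set of WAL paths

    def add(self, server, wal_path):
        """Record a WAL as covered (the 'add :hdfs://...' log entry)."""
        self.covered.setdefault(server, set()).add(wal_path)

    def newer_wals(self, server, candidate_paths):
        """WALs not yet covered; an incremental backup would copy these."""
        return sorted(set(candidate_paths) - self.covered.get(server, set()))
```

Rolling the WAL immediately before recording it is what makes the cut clean: everything up to the recorded file belongs to this backup, everything after it to the next.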
2018-10-08 18:11:41,695 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(139): Connect 0x3307269c to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2018-10-08 18:11:41,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ab48477, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2018-10-08 18:11:41,722 INFO [RS-EventLoopGroup-3-24] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58070, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService 2018-10-08 18:11:41,725 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x3307269c to localhost:54078 2018-10-08 18:11:41,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:11:41,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(565): No existing snapshot, attempting snapshot... 2018-10-08 18:11:41,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(613): Table enabled, starting distributed snapshot. 
2018-10-08 18:11:41,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=33, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=test-1539022262249, type=EXCLUSIVE 2018-10-08 18:11:41,977 DEBUG [PEWorker-2] locking.LockProcedure(309): LOCKED pid=33, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=test-1539022262249, type=EXCLUSIVE 2018-10-08 18:11:42,125 INFO [PEWorker-2] procedure2.TimeoutExecutorThread(82): ADDED pid=33, state=WAITING_TIMEOUT, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=test-1539022262249, type=EXCLUSIVE; timeout=600000, timestamp=1539022902125 2018-10-08 18:11:42,125 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(175): Running FLUSH table snapshot snapshot_1539022301692_default_test-1539022262249 C_M_SNAPSHOT_TABLE on table test-1539022262249 2018-10-08 18:11:42,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(615): Started snapshot: { ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } 2018-10-08 18:11:42,127 DEBUG [Time-limited test] client.HBaseAdmin(2585): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }'' to complete. (max 6666 ms per retry) 2018-10-08 18:11:42,127 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#1) Sleeping: 100ms while waiting for snapshot completion. 2018-10-08 18:11:42,227 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master... 
2018-10-08 18:11:42,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } is done 2018-10-08 18:11:42,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }' is still in progress! 2018-10-08 18:11:42,231 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#2) Sleeping: 200ms while waiting for snapshot completion. 2018-10-08 18:11:42,431 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master... 2018-10-08 18:11:42,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } is done 2018-10-08 18:11:42,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }' is still in progress! 2018-10-08 18:11:42,435 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#3) Sleeping: 300ms while waiting for snapshot completion. 
2018-10-08 18:11:42,557 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_1539022301692_default_test-1539022262249 2018-10-08 18:11:42,557 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_1539022301692_default_test-1539022262249' 2018-10-08 18:11:42,557 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2018-10-08 18:11:42,557 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(204): Procedure 'snapshot_1539022301692_default_test-1539022262249' starting 'acquire' 2018-10-08 18:11:42,557 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_1539022301692_default_test-1539022262249', kicking off acquire phase on members. 
2018-10-08 18:11:42,558 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,558 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(95): Creating acquire znode:/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,566 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2018-10-08 18:11:42,566 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(103): Watching for acquire node:/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:42,566 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/online-snapshot/acquired
2018-10-08 18:11:42,566 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2018-10-08 18:11:42,567 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:42,567 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2018-10-08 18:11:42,567 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(187): Found procedure znode: /1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,567 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,568 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(213): start proc data length is 86
2018-10-08 18:11:42,568 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(215): Found data for znode:/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,568 DEBUG [Time-limited test-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_1539022301692_default_test-1539022262249 from table test-1539022262249 type FLUSH
2018-10-08 18:11:42,569 DEBUG [Time-limited test-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,569 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(159): Starting subprocedure 'snapshot_1539022301692_default_test-1539022262249' with timeout 300000ms
2018-10-08 18:11:42,569 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms
2018-10-08 18:11:42,571 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' starting 'acquire' stage
2018-10-08 18:11:42,571 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(169): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' locally acquired
2018-10-08 18:11:42,571 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(244): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure (snapshot_1539022301692_default_test-1539022262249) in zk
2018-10-08 18:11:42,583 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:42,583 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(252): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:42,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:42,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:42,584 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,584 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(174): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2018-10-08 18:11:42,584 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:42,584 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:42,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:42,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:42,586 DEBUG [Time-limited test-EventThread] procedure.Procedure(298): member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure 'snapshot_1539022301692_default_test-1539022262249' on coordinator
2018-10-08 18:11:42,586 DEBUG [Time-limited test-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@9fb2ccb[Count = 0] remaining members to acquire global barrier
2018-10-08 18:11:42,586 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(212): Procedure 'snapshot_1539022301692_default_test-1539022262249' starting 'in-barrier' execution.
2018-10-08 18:11:42,586 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(119): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,635 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,635 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,635 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(129): Received reached global barrier:/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:42,636 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' received 'reached' from coordinator.
2018-10-08 18:11:42,636 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:42,636 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2018-10-08 18:11:42,636 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.FlushSnapshotSubprocedure(171): Flush Snapshot Tasks submitted for 1 regions
2018-10-08 18:11:42,636 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(317): Waiting for local region snapshots to finish.
2018-10-08 18:11:42,636 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(98): Starting snapshot operation on test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:42,637 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(111): Flush Snapshotting region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. started...
2018-10-08 18:11:42,639 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.HRegion(2647): Flushing 1/1 column families, dataSize=3.17 KB heapSize=11 KB
2018-10-08 18:11:42,735 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:42,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } is done
2018-10-08 18:11:42,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }' is still in progress!
2018-10-08 18:11:42,739 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#4) Sleeping: 500ms while waiting for snapshot completion.
2018-10-08 18:11:43,079 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=3.17 KB at sequenceid=103 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/.tmp/f/478561ad4a494ce18ed12081282156be
2018-10-08 18:11:43,096 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/.tmp/f/478561ad4a494ce18ed12081282156be as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/478561ad4a494ce18ed12081282156be
2018-10-08 18:11:43,110 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/478561ad4a494ce18ed12081282156be, entries=99, sequenceid=103, filesize=8.1 K
2018-10-08 18:11:43,112 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.HRegion(2856): Finished flush of dataSize ~3.17 KB/3247, heapSize ~10.98 KB/11248, currentSize=0 B/0 for be1bf5445faddb63e45726410a07917a in 473ms, sequenceid=103, compaction requested=false
2018-10-08 18:11:43,112 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.MetricsTableSourceImpl(124): Creating new MetricsTableSourceImpl for table 'test-1539022262249'
2018-10-08 18:11:43,113 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] regionserver.HRegion(2362): Flush status journal: Acquiring readlock on region at 1539022302637 Running coprocessor pre-flush hooks at 1539022302637 Obtaining lock to block concurrent updates at 1539022302640 Preparing flush snapshotting stores in be1bf5445faddb63e45726410a07917a at 1539022302640 Finished memstore snapshotting test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a., syncing WAL and waiting on mvcc, flushsize=dataSize=3247, getHeapSize=11248, getOffHeapSize=0 at 1539022302640 Flushing stores of test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. at 1539022302644 Flushing f: creating writer at 1539022302645 Flushing f: appending metadata at 1539022302660 Flushing f: closing flushed file at 1539022302660 Flushing f: reopening flushed file at 1539022303098 Finished flush of dataSize ~3.17 KB/3247, heapSize ~10.98 KB/11248, currentSize=0 B/0 for be1bf5445faddb63e45726410a07917a in 473ms, sequenceid=103, compaction requested=false at 1539022303112 Running post-flush coprocessor hooks at 1539022303113 Flush successful at 1539022303113
2018-10-08 18:11:43,114 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.SnapshotManifest(235): Storing 'test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.' region-info for snapshot.
2018-10-08 18:11:43,114 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.SnapshotManifest(240): Creating references for hfiles
2018-10-08 18:11:43,114 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.SnapshotManifest(250): Adding snapshot references for [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/478561ad4a494ce18ed12081282156be] hfiles
2018-10-08 18:11:43,114 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.SnapshotManifest(259): Adding reference for file (1/1): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/478561ad4a494ce18ed12081282156be
2018-10-08 18:11:43,239 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:43,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } is done
2018-10-08 18:11:43,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }' is still in progress!
2018-10-08 18:11:43,247 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#5) Sleeping: 1000ms while waiting for snapshot completion.
2018-10-08 18:11:43,537 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(138): ... Flush Snapshotting region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. completed.
2018-10-08 18:11:43,537 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool9-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(141): Closing snapshot operation on test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:43,537 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(328): Completed 1/1 local region snapshots.
2018-10-08 18:11:43,538 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(330): Completed 1 local region snapshots.
2018-10-08 18:11:43,538 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(362): cancelling 0 tasks for snapshot cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,538 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(190): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' locally completed
2018-10-08 18:11:43,538 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(268): Marking procedure 'snapshot_1539022301692_default_test-1539022262249' completed for member 'cn012.l42scl.hortonworks.com,37486,1539022239614' in zk
2018-10-08 18:11:43,564 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(195): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' has notified controller of completion
2018-10-08 18:11:43,564 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,564 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,564 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:43,564 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:43,564 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:43,565 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(220): Subprocedure 'snapshot_1539022301692_default_test-1539022262249' completed.
2018-10-08 18:11:43,567 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:43,567 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:43,568 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,568 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,569 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:43,569 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,570 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,571 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(223): Finished data from procedure 'snapshot_1539022301692_default_test-1539022262249' member 'cn012.l42scl.hortonworks.com,37486,1539022239614':
2018-10-08 18:11:43,571 DEBUG [Time-limited test-EventThread] procedure.Procedure(329): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' released barrier for procedure'snapshot_1539022301692_default_test-1539022262249', counting down latch. Waiting for 0 more
2018-10-08 18:11:43,571 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(221): Procedure 'snapshot_1539022301692_default_test-1539022262249' execution completed
2018-10-08 18:11:43,571 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(230): Running finish phase.
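The acquire/reached exchange traced in the entries above is a two-phase, barrier-style protocol: each member does its local 'acquire' and reports to the coordinator; only when every member has joined the acquired barrier does the coordinator create the 'reached' node, after which members run their in-barrier work and release. A minimal single-process sketch using Python threading primitives — the function and names are invented for illustration; the real coordination happens through ZooKeeper znodes under /online-snapshot, not shared memory:

```python
import threading

def run_procedure(members):
    """Coordinator/member barrier: 'acquire' then 'reached', echoing the
    ZKProcedureCoordinator / Subprocedure exchange in the log above."""
    acquired = threading.Barrier(len(members) + 1)  # members + coordinator
    reached = threading.Event()                     # stands in for the 'reached' znode
    done = threading.Barrier(len(members) + 1)
    log, lock = [], threading.Lock()

    def member(name):
        with lock:
            log.append(f"{name} locally acquired")
        acquired.wait()          # join the acquired barrier
        reached.wait()           # wait for coordinator's 'reached' signal
        with lock:
            log.append(f"{name} in-barrier work done")
        done.wait()              # notify coordinator of completion

    threads = [threading.Thread(target=member, args=(m,)) for m in members]
    for t in threads:
        t.start()
    acquired.wait()              # coordinator: all members have acquired
    reached.set()                # "Creating reached barrier zk node"
    done.wait()                  # "Waiting for all members to 'release'"
    for t in threads:
        t.join()
    return log
```

The key property, visible in the log as well, is that no member starts its in-barrier work (the region flush) until every member has finished acquiring.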
2018-10-08 18:11:43,571 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2018-10-08 18:11:43,572 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(166): Attempting to clean out zk node for op:snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,572 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureUtil(286): Clearing all znodes for procedure snapshot_1539022301692_default_test-1539022262249including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2018-10-08 18:11:43,583 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,583 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,583 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:43,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,583 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:43,584 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2018-10-08 18:11:43,584 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:43,584 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,584 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/online-snapshot/abort
2018-10-08 18:11:43,584 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:11:43,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:43,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,585 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,586 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,586 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,586 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:43,587 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,587 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,599 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,601 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2018-10-08 18:11:43,601 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,601 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/online-snapshot/acquired
2018-10-08 18:11:43,603 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2018-10-08 18:11:43,601 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,603 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:43,603 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,603 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,603 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,603 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:43,603 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2018-10-08 18:11:43,603 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:43,603 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/online-snapshot/abort
2018-10-08 18:11:43,605 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifest(478): Convert to Single Snapshot Manifest
2018-10-08 18:11:43,605 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:11:43,607 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifestV1(128): No regions under directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:44,062 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(265): Sentinel is done, just moving the snapshot from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_1539022301692_default_test-1539022262249 to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:44,247 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
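The TakeSnapshotHandler entries above show the snapshot being assembled under .hbase-snapshot/.tmp and, once the sentinel is done, moved as a whole to its final name, so other readers only ever observe a complete snapshot directory. A sketch of that write-to-temp-then-rename idiom on a local filesystem — the function, paths, and file contents here are invented for illustration; HBase performs the equivalent move on HDFS:

```python
import os
import tempfile

def publish_snapshot(root, name, files):
    """Build a snapshot under a .tmp working dir, then publish it with a
    single directory rename - mirroring the .hbase-snapshot/.tmp -> final move."""
    working = os.path.join(root, ".tmp", name)
    final = os.path.join(root, name)
    os.makedirs(working)
    for fname, data in files.items():   # write manifest/reference files
        with open(os.path.join(working, fname), "w") as f:
            f.write(data)
    os.rename(working, final)           # one rename makes the snapshot visible
    return final

root = tempfile.mkdtemp()
path = publish_snapshot(root, "snapshot_demo", {"manifest": "region-info"})
```

Because the rename is a single metadata operation on the same filesystem, a crash before it leaves only the .tmp directory behind (which is why the log later shows a cleanup pass over the working dir), never a half-written published snapshot.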
2018-10-08 18:11:44,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } is done
2018-10-08 18:11:44,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }' is still in progress!
2018-10-08 18:11:44,251 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#6) Sleeping: 2000ms while waiting for snapshot completion.
2018-10-08 18:11:44,925 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(222): Snapshot snapshot_1539022301692_default_test-1539022262249 of table test-1539022262249 completed
2018-10-08 18:11:44,925 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(235): Launching cleanup of working dir:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:44,926 ERROR [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(240): Couldn't delete snapshot working directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:44,931 DEBUG [PEWorker-13] locking.LockProcedure(240): UNLOCKED pid=33, state=RUNNABLE, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=test-1539022262249, type=EXCLUSIVE
2018-10-08 18:11:45,132 INFO [PEWorker-13] procedure2.ProcedureExecutor(1507): Finished pid=33, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=test-1539022262249, type=EXCLUSIVE in 3.1960sec
2018-10-08 18:11:46,251 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:46,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH } is done
2018-10-08 18:11:46,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(385): Snapshot '{ ss=snapshot_1539022301692_default_test-1539022262249 table=test-1539022262249 type=FLUSH }' has completed, notifying client.
2018-10-08 18:11:46,256 DEBUG [Time-limited test] impl.FullTableBackupClient(174): snapshot copy for backup_1539022286146
2018-10-08 18:11:46,256 INFO [Time-limited test] impl.FullTableBackupClient(71): Snapshot copy is starting.
2018-10-08 18:11:46,260 DEBUG [Time-limited test] impl.FullTableBackupClient(83): There are 1 snapshots to be copied.
2018-10-08 18:11:46,261 DEBUG [Time-limited test] impl.FullTableBackupClient(98): Setting snapshot copy job name to : Full-Backup_backup_1539022286146_test-1539022262249
2018-10-08 18:11:46,261 DEBUG [Time-limited test] impl.FullTableBackupClient(102): Copy snapshot snapshot_1539022301692_default_test-1539022262249 to hdfs://localhost:41712/backupUT/backup_1539022286146/default/test-1539022262249/
2018-10-08 18:11:46,277 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(384): Doing SNAPSHOT_COPY
2018-10-08 18:11:46,308 DEBUG [Time-limited test] snapshot.ExportSnapshot(969): inputFs=hdfs://localhost:41712 inputRoot=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9
2018-10-08 18:11:46,327 DEBUG [Time-limited test] snapshot.ExportSnapshot(973): outputFs=hdfs://localhost:41712 outputRoot=hdfs://localhost:41712/backupUT/backup_1539022286146/default/test-1539022262249
2018-10-08 18:11:46,331 INFO [Time-limited test] snapshot.ExportSnapshot(1034): Copy Snapshot Manifest from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/snapshot_1539022301692_default_test-1539022262249 to hdfs://localhost:41712/backupUT/backup_1539022286146/default/test-1539022262249/.hbase-snapshot/.tmp/snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:46,797 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.HConstants, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-common/3.0.0-SNAPSHOT/hbase-common-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,798 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-protocol/3.0.0-SNAPSHOT/hbase-protocol-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,798 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-protocol-shaded/3.0.0-SNAPSHOT/hbase-protocol-shaded-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,799 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.client.Put, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-client/3.0.0-SNAPSHOT/hbase-client-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,799 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.ipc.RpcServer, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,800 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-hadoop-compat/3.0.0-SNAPSHOT/hbase-hadoop-compat-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,800 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.mapreduce.JobUtil, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/3.0.0-SNAPSHOT/hbase-hadoop2-compat-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,897 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop-5669473616648905130.jar
2018-10-08 18:11:46,898 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.metrics.impl.FastLongHistogram, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-metrics/3.0.0-SNAPSHOT/hbase-metrics-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,899 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.metrics.Snapshot, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-metrics-api/3.0.0-SNAPSHOT/hbase-metrics-api-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,899 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.zookeeper.ZooKeeper, using jar /home/hbase/.m2/repository/org/apache/zookeeper/zookeeper/3.4.10/zookeeper-3.4.10.jar
2018-10-08 18:11:46,900 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.io.netty.channel.Channel, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-netty/2.1.0/hbase-shaded-netty-2.1.0.jar
2018-10-08 18:11:46,900 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.google.protobuf.Message, using jar /home/hbase/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2018-10-08 18:11:46,901 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-protobuf/2.1.0/hbase-shaded-protobuf-2.1.0.jar
2018-10-08 18:11:46,901 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.com.google.common.collect.Lists, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-miscellaneous/2.1.0/hbase-shaded-miscellaneous-2.1.0.jar
2018-10-08 18:11:46,902 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.htrace.core.Tracer, using jar /home/hbase/.m2/repository/org/apache/htrace/htrace-core4/4.2.0-incubating/htrace-core4-4.2.0-incubating.jar
2018-10-08 18:11:46,902 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.codahale.metrics.MetricRegistry, using jar /home/hbase/.m2/repository/io/dropwizard/metrics/metrics-core/3.2.1/metrics-core-3.2.1.jar
2018-10-08 18:11:46,903 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.commons.lang3.ArrayUtils, using jar /home/hbase/.m2/repository/org/apache/commons/commons-lang3/3.6/commons-lang3-3.6.jar
2018-10-08 18:11:46,903 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.fasterxml.jackson.databind.ObjectMapper, using jar /home/hbase/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar
2018-10-08 18:11:46,903 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.fasterxml.jackson.core.Versioned, using jar /home/hbase/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.2/jackson-core-2.9.2.jar
2018-10-08 18:11:46,904 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.fasterxml.jackson.annotation.JsonView, using jar /home/hbase/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.2/jackson-annotations-2.9.2.jar
2018-10-08 18:11:46,904 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.zookeeper.ZKWatcher, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-zookeeper/3.0.0-SNAPSHOT/hbase-zookeeper-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:46,909 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.io.LongWritable, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-common/3.1.1/hadoop-common-3.1.1.jar
2018-10-08 18:11:46,909 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.io.Text, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-common/3.1.1/hadoop-common-3.1.1.jar
2018-10-08 18:11:46,910 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.mapreduce.lib.input.TextInputFormat, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.1/hadoop-mapreduce-client-core-3.1.1.jar
2018-10-08 18:11:46,910 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.io.LongWritable, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-common/3.1.1/hadoop-common-3.1.1.jar
2018-10-08 18:11:46,911 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.io.Text, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-common/3.1.1/hadoop-common-3.1.1.jar
2018-10-08 18:11:46,911 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.1/hadoop-mapreduce-client-core-3.1.1.jar
2018-10-08 18:11:46,912 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.1/hadoop-mapreduce-client-core-3.1.1.jar
2018-10-08 18:11:47,022 WARN [Time-limited test] mapreduce.JobResourceUploader(147): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-10-08 18:11:47,056 WARN [Time-limited test] mapreduce.JobResourceUploader(480): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2018-10-08 18:11:47,072 INFO [Time-limited test] snapshot.ExportSnapshot(574): Loading Snapshot 'snapshot_1539022301692_default_test-1539022262249' hfile list
2018-10-08 18:11:47,083 DEBUG [Time-limited test] snapshot.ExportSnapshot(660): export split=0 size=8.1 K
2018-10-08 18:11:47,427 WARN [Time-limited test] fs.FileUtil(1075): Command 'ln -s /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_local/1539022307339/libjars /mnt/disk2/a/hbase/hbase-backup/libjars/*' failed 1 with: ln: failed to create symbolic link ‘/mnt/disk2/a/hbase/hbase-backup/libjars/*’: No such file or directory
2018-10-08 18:11:47,427 WARN [Time-limited test] mapred.LocalDistributedCacheManager(202): Failed to create symlink: /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_local/1539022307339/libjars <- /mnt/disk2/a/hbase/hbase-backup/libjars/*
2018-10-08 18:11:47,690 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(218): Using bufferSize=128 M
2018-10-08 18:11:47,762 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(446): copy completed for input=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/478561ad4a494ce18ed12081282156be output=hdfs://localhost:41712/backupUT/backup_1539022286146/default/test-1539022262249/archive/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/478561ad4a494ce18ed12081282156be
2018-10-08 18:11:47,763 INFO [LocalJobRunner Map Task Executor #0] snapshot.ExportSnapshot$ExportMapper(447): size=8276 (8.1 K) time=0sec 3.946M/sec
2018-10-08 18:11:48,576 INFO [Time-limited test] snapshot.ExportSnapshot(1083): Finalize the Snapshot Export
2018-10-08 18:11:48,578 INFO [Time-limited test] snapshot.ExportSnapshot(1094): Verify snapshot integrity
2018-10-08 18:11:48,594 INFO [Time-limited test] snapshot.ExportSnapshot(1098): Export Completed: snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:48,595 INFO [Time-limited test] impl.FullTableBackupClient(114): Snapshot copy snapshot_1539022301692_default_test-1539022262249 finished.
2018-10-08 18:11:48,596 DEBUG [Time-limited test] impl.BackupSystemTable(985): test-1539022262249
2018-10-08 18:11:48,630 DEBUG [Time-limited test] impl.BackupManager(282): Getting the direct ancestors of the current backup backup_1539022286146
2018-10-08 18:11:48,630 DEBUG [Time-limited test] impl.BackupManager(288): Current backup is a full backup, no direct ancestor for it.
2018-10-08 18:11:49,058 INFO [Time-limited test] impl.BackupManifest(489): Manifest file stored to hdfs://localhost:41712/backupUT/backup_1539022286146/.backup.manifest
2018-10-08 18:11:49,059 DEBUG [Time-limited test] impl.TableBackupClient(386): Backup backup_1539022286146 finished: type=FULL,tablelist=test-1539022262249,targetRootDir=hdfs://localhost:41712/backupUT,startts=1539022301251,completets=1539022308626,bytescopied=0
2018-10-08 18:11:49,059 DEBUG [Time-limited test] impl.TableBackupClient(143): Trying to delete snapshot for full backup.
2018-10-08 18:11:49,059 DEBUG [Time-limited test] impl.TableBackupClient(148): Trying to delete snapshot: snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:49,080 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(686): Client=hbase//172.18.128.12 delete name: "snapshot_1539022301692_default_test-1539022262249"
2018-10-08 18:11:49,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(315): Deleting snapshot: snapshot_1539022301692_default_test-1539022262249
2018-10-08 18:11:49,093 DEBUG [Time-limited test] impl.TableBackupClient(153): Deleting the snapshot snapshot_1539022301692_default_test-1539022262249 for backup backup_1539022286146 succeeded.
2018-10-08 18:11:49,097 DEBUG [Time-limited test] impl.BackupSystemTable(1665): Deleting snapshot_backup_system from the system
2018-10-08 18:11:49,116 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(686): Client=hbase//172.18.128.12 delete name: "snapshot_backup_system"
2018-10-08 18:11:49,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(315): Deleting snapshot: snapshot_backup_system
2018-10-08 18:11:49,122 DEBUG [Time-limited test] impl.BackupSystemTable(1670): Done deleting backup system table snapshot
2018-10-08 18:11:49,126 DEBUG [Time-limited test] impl.BackupSystemTable(610): Finish backup exclusive operation
2018-10-08 18:11:49,150 INFO [Time-limited test] impl.TableBackupClient(405): Backup backup_1539022286146 completed.
2018-10-08 18:11:49,165 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system should be available
2018-10-08 18:11:49,165 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system exists and available
2018-10-08 18:11:49,170 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system_bulk should be available
2018-10-08 18:11:49,170 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available
2018-10-08 18:11:49,362 DEBUG [Time-limited test] backup.TestIncrementalBackupWithBulkLoad(94): bulk loading into TestIncBackupDeleteTable
2018-10-08 18:11:49,381 INFO [Time-limited test] hbase.HBaseTestingUtility(462): System.getProperty("hadoop.log.dir") already set to: /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs so I do NOT create it in target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb
2018-10-08 18:11:49,381 INFO [Time-limited test] hbase.HBaseTestingUtility(462): System.getProperty("hadoop.tmp.dir") already set to: /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_tmp so I do NOT create it in target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb
2018-10-08 18:11:49,381 WARN [Time-limited test] hbase.HBaseTestingUtility(466): hadoop.tmp.dir property value differs in configuration and system: Configuration=/tmp/hadoop-hbase while System=/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_tmp Erasing configuration value by system value.
2018-10-08 18:11:49,381 DEBUG [Time-limited test] hbase.HBaseTestingUtility(350): Setting hbase.rootdir to /mnt/disk2/a/hbase/hbase-backup/target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb
2018-10-08 18:11:49,382 INFO [Time-limited test] fs.HFileSystem(348): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-10-08 18:11:49,385 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=752.84 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=752.84 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:49,810 INFO [Time-limited test] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=752.84 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=752.84 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:50,292 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x66ae47d1 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:50,309 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71768deb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:50,327 INFO [RS-EventLoopGroup-3-26] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58248, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:50,347 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table test-1539022262249 should be available
2018-10-08 18:11:50,363 INFO [RS-EventLoopGroup-1-6] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:42986, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=MasterService
2018-10-08 18:11:50,418 INFO [LoadIncrementalHFiles-1] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=752.84 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=752.84 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:50,418 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=752.84 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=752.84 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:50,427 INFO [LoadIncrementalHFiles-1] tool.LoadIncrementalHFiles(722): Trying to load hfile=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_1 first=Optional[ddd] last=Optional[ooo]
2018-10-08 18:11:50,427 INFO [LoadIncrementalHFiles-0] tool.LoadIncrementalHFiles(722): Trying to load hfile=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_0 first=Optional[aaaa] last=Optional[cccc]
2018-10-08 18:11:50,445 DEBUG [LoadIncrementalHFiles-2] tool.LoadIncrementalHFiles$2(529): Going to connect to server region=test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a., hostname=cn012.l42scl.hortonworks.com,37486,1539022239614, seqNum=2 for row with hfile group [{f,hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_1}{f,hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_0}]
2018-10-08 18:11:50,474 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(780): Validating hfile at hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_1 for inclusion in store f region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:50,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(793): HFile bounds: first=ddd last=ooo
2018-10-08 18:11:50,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(795): Region bounds: first= last=
2018-10-08 18:11:50,480 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(780): Validating hfile at hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_0 for inclusion in store f region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a.
2018-10-08 18:11:50,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(793): HFile bounds: first=aaaa last=cccc
2018-10-08 18:11:50,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(795): Region bounds: first= last=
2018-10-08 18:11:50,485 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegion(2647): Flushing 1/1 column families, dataSize=3.46 KB heapSize=11.08 KB
2018-10-08 18:11:50,910 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=3.46 KB at sequenceid=205 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/.tmp/f/82c499178e4f44569b6206459b83b8b4
2018-10-08 18:11:50,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/.tmp/f/82c499178e4f44569b6206459b83b8b4 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/82c499178e4f44569b6206459b83b8b4
2018-10-08 18:11:50,948 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/82c499178e4f44569b6206459b83b8b4, entries=99, sequenceid=205, filesize=8.5 K
2018-10-08 18:11:50,952 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegion(2856): Finished flush of dataSize ~3.46 KB/3544, heapSize ~11.06 KB/11328, currentSize=0 B/0 for be1bf5445faddb63e45726410a07917a in 468ms, sequenceid=205, compaction requested=false
2018-10-08 18:11:50,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegion(2362): Flush status journal: Acquiring readlock on region at 1539022310484 Running coprocessor pre-flush hooks at 1539022310484 Obtaining lock to block concurrent updates at 1539022310485 Preparing flush snapshotting stores in be1bf5445faddb63e45726410a07917a at 1539022310485 Finished memstore snapshotting test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a., syncing WAL and waiting on mvcc, flushsize=dataSize=3544, getHeapSize=11328, getOffHeapSize=0 at 1539022310485 Flushing stores of test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. at 1539022310486 Flushing f: creating writer at 1539022310486 Flushing f: appending metadata at 1539022310498 Flushing f: closing flushed file at 1539022310498 Flushing f: reopening flushed file at 1539022310939 Finished flush of dataSize ~3.46 KB/3544, heapSize ~11.06 KB/11328, currentSize=0 B/0 for be1bf5445faddb63e45726410a07917a in 468ms, sequenceid=205, compaction requested=false at 1539022310952 Running post-flush coprocessor hooks at 1539022310952 Flush successful at 1539022310952
2018-10-08 18:11:50,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.SecureBulkLoadManager$SecureBulkLoadListener(331): Bulk-load file hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_1 is copied to destination staging dir.
2018-10-08 18:11:51,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_1 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_
2018-10-08 18:11:51,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.SecureBulkLoadManager$SecureBulkLoadListener(331): Bulk-load file hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/TestIncBackupDeleteTable/f/hfile_0 is copied to destination staging dir.
2018-10-08 18:11:51,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_0 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_
2018-10-08 18:11:51,798 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] zookeeper.ReadOnlyZKClient(139): Connect 0x3aa84139 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:51,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b41357a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:51,843 INFO [RS-EventLoopGroup-1-7] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:43022, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=MasterService
2018-10-08 18:11:51,869 INFO [RS-EventLoopGroup-3-30] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58290, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:51,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] client.ConnectionImplementation(672): Table backup:system should be available
2018-10-08 18:11:51,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(244): Backup table backup:system exists and available
2018-10-08 18:11:51,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] client.ConnectionImplementation(672): Table backup:system_bulk should be available
2018-10-08 18:11:51,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available
2018-10-08 18:11:51,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(391): write bulk load descriptor to backup test-1539022262249 with 2 entries
2018-10-08 18:11:51,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(1695): writing raw bulk path hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ for test-1539022262249 be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:51,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(1695): writing raw bulk path hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ for test-1539022262249 be1bf5445faddb63e45726410a07917a
2018-10-08 18:11:51,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(398): written 2 rows for bulk load of test-1539022262249
2018-10-08 18:11:51,908 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] client.ConnectionImplementation(1801): Closing master protocol: MasterService
2018-10-08 18:11:51,908 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x3aa84139 to localhost:54078
2018-10-08 18:11:51,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:11:51,913 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(871): Loaded HFile hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_1 into store 'f' as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ - updating store file list.
2018-10-08 18:11:51,925 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStoreFile(445): HFile Bloom filter type for f565f49046b04eecbf8d129eac7a7b88_SeqId_205_: NONE, but ROW specified in column family configuration
2018-10-08 18:11:51,925 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(905): Loaded HFile hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ into store 'f
2018-10-08 18:11:51,926 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(877): Successfully loaded store file hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_1 into store f (new location: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_)
2018-10-08 18:11:51,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.SecureBulkLoadManager$SecureBulkLoadListener(347): Bulk Load done for: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_1
2018-10-08 18:11:51,928 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(871): Loaded HFile hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_0 into store 'f' as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ - updating store file list.
2018-10-08 18:11:51,938 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStoreFile(445): HFile Bloom filter type for 41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_: NONE, but ROW specified in column family configuration
2018-10-08 18:11:51,938 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(905): Loaded HFile hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ into store 'f
2018-10-08 18:11:51,938 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.HStore(877): Successfully loaded store file hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_0 into store f (new location: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_)
2018-10-08 18:11:51,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.SecureBulkLoadManager$SecureBulkLoadListener(347): Bulk Load done for: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj/f/hfile_0
2018-10-08 18:11:51,951 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] zookeeper.ReadOnlyZKClient(139): Connect 0x60912f33 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:51,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2aeccb18, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:51,978 INFO [RS-EventLoopGroup-1-8] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:43030, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase.hfs.0 (auth:SIMPLE), service=MasterService
2018-10-08 18:11:51,993 INFO [RS-EventLoopGroup-3-33] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58298, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase.hfs.0 (auth:SIMPLE), service=ClientService
2018-10-08 18:11:52,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] client.ConnectionImplementation(672): Table backup:system should be available
2018-10-08 18:11:52,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(244): Backup table backup:system exists and available
2018-10-08 18:11:52,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] client.ConnectionImplementation(672): Table backup:system_bulk should be available
2018-10-08 18:11:52,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available
2018-10-08 18:11:52,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(371): write bulk load descriptor to backup test-1539022262249 with 1 entries
2018-10-08 18:11:52,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(1615): writing done bulk path hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ for test-1539022262249 
be1bf5445faddb63e45726410a07917a 2018-10-08 18:11:52,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(1615): writing done bulk path hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ for test-1539022262249 be1bf5445faddb63e45726410a07917a 2018-10-08 18:11:52,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] impl.BackupSystemTable(377): written 2 rows for bulk load of test-1539022262249 2018-10-08 18:11:52,024 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] client.ConnectionImplementation(1801): Closing master protocol: MasterService 2018-10-08 18:11:52,024 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x60912f33 to localhost:54078 2018-10-08 18:11:52,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:11:52,041 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486] regionserver.SecureBulkLoadManager(157): Cleaned up hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/staging/hbase__test-1539022262249__fo5bah2ju620b3votcvn0fu177eh2pkllvqfk8bi1hp8ij7s6gg9b75qo0ij90cj successfully. 
2018-10-08 18:11:52,045 INFO [Time-limited test] client.ConnectionImplementation(1801): Closing master protocol: MasterService 2018-10-08 18:11:52,046 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x66ae47d1 to localhost:54078 2018-10-08 18:11:52,046 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:11:52,090 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system should be available 2018-10-08 18:11:52,090 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system exists and available 2018-10-08 18:11:52,094 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system_bulk should be available 2018-10-08 18:11:52,095 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available 2018-10-08 18:11:52,112 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system should be available 2018-10-08 18:11:52,113 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system exists and available 2018-10-08 18:11:52,117 DEBUG [Time-limited test] client.ConnectionImplementation(672): Table backup:system_bulk should be available 2018-10-08 18:11:52,118 DEBUG [Time-limited test] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available 2018-10-08 18:11:52,118 DEBUG [Time-limited test] impl.BackupSystemTable(587): Start new backup exclusive operation 2018-10-08 18:11:52,126 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1492): Client=hbase//172.18.128.12 snapshot request for:{ ss=snapshot_backup_system table=backup:system type=FLUSH } 2018-10-08 18:11:52,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotDescriptionUtils(313): Creation time not specified, setting to:1539022312126 (current time:1539022312126). 
2018-10-08 18:11:52,127 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(139): Connect 0x36a29cb8 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2018-10-08 18:11:52,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32714f76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2018-10-08 18:11:52,152 INFO [RS-EventLoopGroup-3-35] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58304, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService 2018-10-08 18:11:52,153 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x36a29cb8 to localhost:54078 2018-10-08 18:11:52,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:11:52,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(565): No existing snapshot, attempting snapshot... 2018-10-08 18:11:52,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(613): Table enabled, starting distributed snapshot. 
2018-10-08 18:11:52,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=34, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE 2018-10-08 18:11:52,317 DEBUG [PEWorker-15] locking.LockProcedure(309): LOCKED pid=34, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE 2018-10-08 18:11:52,364 INFO [PEWorker-15] procedure2.TimeoutExecutorThread(82): ADDED pid=34, state=WAITING_TIMEOUT, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE; timeout=600000, timestamp=1539022912364 2018-10-08 18:11:52,364 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(175): Running FLUSH table snapshot snapshot_backup_system C_M_SNAPSHOT_TABLE on table backup:system 2018-10-08 18:11:52,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(615): Started snapshot: { ss=snapshot_backup_system table=backup:system type=FLUSH } 2018-10-08 18:11:52,365 DEBUG [Time-limited test] client.HBaseAdmin(2585): Waiting a max of 300000 ms for snapshot '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' to complete. (max 6666 ms per retry) 2018-10-08 18:11:52,366 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#1) Sleeping: 100ms while waiting for snapshot completion. 2018-10-08 18:11:52,466 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master... 
2018-10-08 18:11:52,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done 2018-10-08 18:11:52,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshotting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress! 2018-10-08 18:11:52,468 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#2) Sleeping: 200ms while waiting for snapshot completion. 2018-10-08 18:11:52,668 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master... 2018-10-08 18:11:52,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done 2018-10-08 18:11:52,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshotting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress! 2018-10-08 18:11:52,671 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#3) Sleeping: 300ms while waiting for snapshot completion. 
2018-10-08 18:11:52,783 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] procedure.ProcedureCoordinator(177): Submitting procedure snapshot_backup_system 2018-10-08 18:11:52,783 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(196): Starting procedure 'snapshot_backup_system' 2018-10-08 18:11:52,783 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2018-10-08 18:11:52,783 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(204): Procedure 'snapshot_backup_system' starting 'acquire' 2018-10-08 18:11:52,783 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(247): Starting procedure 'snapshot_backup_system', kicking off acquire phase on members. 2018-10-08 18:11:52,784 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_backup_system 2018-10-08 18:11:52,784 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(95): Creating acquire znode:/1/online-snapshot/acquired/snapshot_backup_system 2018-10-08 18:11:52,791 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(103): Watching for acquire node:/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:52,791 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/1/online-snapshot/acquired 2018-10-08 18:11:52,791 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/online-snapshot/acquired 2018-10-08 18:11:52,791 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2018-10-08 18:11:52,791 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:52,791 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire' 2018-10-08 18:11:52,791 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(187): Found procedure znode: /1/online-snapshot/acquired/snapshot_backup_system 2018-10-08 18:11:52,792 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/abort/snapshot_backup_system 2018-10-08 18:11:52,792 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(213): start proc data length is 54 2018-10-08 18:11:52,792 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(215): Found data for znode:/1/online-snapshot/acquired/snapshot_backup_system 2018-10-08 18:11:52,792 DEBUG [Time-limited test-EventThread] snapshot.RegionServerSnapshotManager(177): Launching subprocedure for snapshot snapshot_backup_system from table backup:system type FLUSH 2018-10-08 18:11:52,793 DEBUG [Time-limited test-EventThread] procedure.ProcedureMember(149): Submitting new 
Subprocedure:snapshot_backup_system 2018-10-08 18:11:52,793 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(159): Starting subprocedure 'snapshot_backup_system' with timeout 300000ms 2018-10-08 18:11:52,793 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 300000 ms 2018-10-08 18:11:52,795 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(167): Subprocedure 'snapshot_backup_system' starting 'acquire' stage 2018-10-08 18:11:52,795 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(169): Subprocedure 'snapshot_backup_system' locally acquired 2018-10-08 18:11:52,795 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(244): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure (snapshot_backup_system) in zk 2018-10-08 18:11:52,799 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(252): Watch for global barrier reached:/1/online-snapshot/reached/snapshot_backup_system 2018-10-08 18:11:52,799 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:52,799 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:52,799 DEBUG [Time-limited 
test-EventThread] procedure.ZKProcedureUtil(246): Current zk system: 2018-10-08 18:11:52,799 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot 2018-10-08 18:11:52,799 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_backup_system 2018-10-08 18:11:52,799 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(174): Subprocedure 'snapshot_backup_system' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2018-10-08 18:11:52,799 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort 2018-10-08 18:11:52,800 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired 2018-10-08 18:11:52,800 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system 2018-10-08 18:11:52,800 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:52,800 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached 2018-10-08 18:11:52,801 DEBUG [Time-limited test-EventThread] procedure.Procedure(298): member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure 'snapshot_backup_system' on coordinator 2018-10-08 18:11:52,801 DEBUG [Time-limited test-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@5e4f12e4[Count = 0] remaining members to acquire global barrier 2018-10-08 18:11:52,801 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(212): Procedure 'snapshot_backup_system' starting 'in-barrier' execution. 
2018-10-08 18:11:52,801 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(119): Creating reached barrier zk node:/1/online-snapshot/reached/snapshot_backup_system 2018-10-08 18:11:52,807 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system 2018-10-08 18:11:52,807 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/online-snapshot/reached/snapshot_backup_system 2018-10-08 18:11:52,807 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(129): Received reached global barrier:/1/online-snapshot/reached/snapshot_backup_system 2018-10-08 18:11:52,807 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(188): Subprocedure 'snapshot_backup_system' received 'reached' from coordinator. 
2018-10-08 18:11:52,807 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:52,808 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(216): Waiting for all members to 'release' 2018-10-08 18:11:52,808 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.FlushSnapshotSubprocedure(171): Flush Snapshot Tasks submitted for 1 regions 2018-10-08 18:11:52,808 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(317): Waiting for local region snapshots to finish. 2018-10-08 18:11:52,808 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(98): Starting snapshot operation on backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. 2018-10-08 18:11:52,808 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(111): Flush Snapshotting region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. started... 2018-10-08 18:11:52,808 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HRegion(2647): Flushing 2/2 column families, dataSize=1.56 KB heapSize=2.94 KB 2018-10-08 18:11:52,971 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master... 
2018-10-08 18:11:52,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done 2018-10-08 18:11:52,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshotting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress! 2018-10-08 18:11:52,974 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#4) Sleeping: 500ms while waiting for snapshot completion. 2018-10-08 18:11:53,232 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=1.06 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/meta/b5fc04967580469080d43be998276c7a 2018-10-08 18:11:53,475 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master... 2018-10-08 18:11:53,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done 2018-10-08 18:11:53,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshotting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress! 2018-10-08 18:11:53,478 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#5) Sleeping: 1000ms while waiting for snapshot completion. 
2018-10-08 18:11:53,671 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=513 B at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/1c82b52cf5bd40d894d18380212371df 2018-10-08 18:11:53,682 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/meta/b5fc04967580469080d43be998276c7a as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a 2018-10-08 18:11:53,700 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a, entries=7, sequenceid=18, filesize=5.9 K 2018-10-08 18:11:53,702 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/1c82b52cf5bd40d894d18380212371df as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df 2018-10-08 18:11:53,713 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HStore(1071): Added 
hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df, entries=2, sequenceid=18, filesize=5.1 K 2018-10-08 18:11:53,715 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HRegion(2856): Finished flush of dataSize ~1.56 KB/1598, heapSize ~2.91 KB/2976, currentSize=0 B/0 for 29493d1f83444b313854401df15f30aa in 907ms, sequenceid=18, compaction requested=false 2018-10-08 18:11:53,715 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] regionserver.HRegion(2362): Flush status journal: Acquiring readlock on region at 1539022312808 Running coprocessor pre-flush hooks at 1539022312808 Obtaining lock to block concurrent updates at 1539022312808 Preparing flush snapshotting stores in 29493d1f83444b313854401df15f30aa at 1539022312808 Finished memstore snapshotting backup:system,,1539022287674.29493d1f83444b313854401df15f30aa., syncing WAL and waiting on mvcc, flushsize=dataSize=1598, getHeapSize=2976, getOffHeapSize=0 at 1539022312809 Flushing stores of backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. 
at 1539022312810 Flushing meta: creating writer at 1539022312811 Flushing meta: appending metadata at 1539022312821 Flushing meta: closing flushed file at 1539022312821 Flushing session: creating writer at 1539022313254 Flushing session: appending metadata at 1539022313258 Flushing session: closing flushed file at 1539022313258 Flushing meta: reopening flushed file at 1539022313687 Flushing session: reopening flushed file at 1539022313704 Finished flush of dataSize ~1.56 KB/1598, heapSize ~2.91 KB/2976, currentSize=0 B/0 for 29493d1f83444b313854401df15f30aa in 907ms, sequenceid=18, compaction requested=false at 1539022313715 Running post-flush coprocessor hooks at 1539022313715 Flush successful at 1539022313715 2018-10-08 18:11:53,716 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(235): Storing 'backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.' region-info for snapshot. 2018-10-08 18:11:53,716 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(240): Creating references for hfiles 2018-10-08 18:11:53,716 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(250): Adding snapshot references for [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a] hfiles 2018-10-08 18:11:53,716 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(259): Adding reference for file (1/1): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a 2018-10-08 18:11:53,717 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(250): Adding snapshot 
references for [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df] hfiles 2018-10-08 18:11:53,717 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(259): Adding reference for file (1/2): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949 2018-10-08 18:11:53,718 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.SnapshotManifest(259): Adding reference for file (2/2): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df 2018-10-08 18:11:54,132 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(138): ... Flush Snapshotting region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. completed. 2018-10-08 18:11:54,132 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-snapshot-pool10-thread-1] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(141): Closing snapshot operation on backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. 2018-10-08 18:11:54,132 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(328): Completed 1/1 local region snapshots. 
2018-10-08 18:11:54,137 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(330): Completed 1 local region snapshots. 2018-10-08 18:11:54,137 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(362): cancelling 0 tasks for snapshot cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:54,138 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(190): Subprocedure 'snapshot_backup_system' locally completed 2018-10-08 18:11:54,138 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.ZKProcedureMemberRpcs(268): Marking procedure 'snapshot_backup_system' completed for member 'cn012.l42scl.hortonworks.com,37486,1539022239614' in zk 2018-10-08 18:11:54,151 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:11:54,151 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(195): Subprocedure 'snapshot_backup_system' has notified controller of completion 2018-10-08 18:11:54,151 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer. 2018-10-08 18:11:54,151 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1] procedure.Subprocedure(220): Subprocedure 'snapshot_backup_system' completed. 
2018-10-08 18:11:54,151 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,152 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:54,153 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:54,153 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:54,153 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:54,154 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:54,154 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,155 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:54,155 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:54,155 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,156 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(223): Finished data from procedure 'snapshot_backup_system' member 'cn012.l42scl.hortonworks.com,37486,1539022239614':
2018-10-08 18:11:54,156 DEBUG [Time-limited test-EventThread] procedure.Procedure(329): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' released barrier for procedure 'snapshot_backup_system', counting down latch. Waiting for 0 more
2018-10-08 18:11:54,156 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(221): Procedure 'snapshot_backup_system' execution completed
2018-10-08 18:11:54,156 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(230): Running finish phase.
2018-10-08 18:11:54,156 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2018-10-08 18:11:54,156 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureCoordinator(166): Attempting to clean out zk node for op:snapshot_backup_system
2018-10-08 18:11:54,156 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] procedure.ZKProcedureUtil(286): Clearing all znodes for procedure snapshot_backup_system including nodes /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/online-snapshot
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,166 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:54,166 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,166 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2018-10-08 18:11:54,167 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:54,167 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/online-snapshot/abort
2018-10-08 18:11:54,167 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:11:54,167 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:54,167 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,167 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:54,168 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,168 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,168 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:54,168 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----snapshot_backup_system
2018-10-08 18:11:54,169 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,183 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired
2018-10-08 18:11:54,183 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/online-snapshot/acquired
2018-10-08 18:11:54,183 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2018-10-08 18:11:54,183 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:54,183 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for snapshot_backup_system
2018-10-08 18:11:54,184 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(614): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Unable to get data of znode /1/online-snapshot/abort/snapshot_backup_system because node does not exist (not an error)
2018-10-08 18:11:54,184 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,184 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifest(478): Convert to Single Snapshot Manifest
2018-10-08 18:11:54,184 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:54,184 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/abort
2018-10-08 18:11:54,185 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/acquired/snapshot_backup_system
2018-10-08 18:11:54,185 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/online-snapshot/abort
2018-10-08 18:11:54,185 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:54,185 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2018-10-08 18:11:54,185 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:54,185 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/reached/snapshot_backup_system
2018-10-08 18:11:54,185 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/online-snapshot/abort/snapshot_backup_system
2018-10-08 18:11:54,193 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifestV1(128): No regions under directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system
2018-10-08 18:11:54,478 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:54,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:54,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshotting '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:11:54,481 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#6) Sleeping: 2000ms while waiting for snapshot completion.
2018-10-08 18:11:54,629 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(265): Sentinel is done, just moving the snapshot from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/snapshot_backup_system
2018-10-08 18:11:55,469 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(222): Snapshot snapshot_backup_system of table backup:system completed
2018-10-08 18:11:55,470 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(235): Launching cleanup of working dir:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system
2018-10-08 18:11:55,471 ERROR [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(240): Couldn't delete snapshot working directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/snapshot_backup_system
2018-10-08 18:11:55,477 DEBUG [PEWorker-3] locking.LockProcedure(240): UNLOCKED pid=34, state=RUNNABLE, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:11:55,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1507): Finished pid=34, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE in 3.3200sec
2018-10-08 18:11:56,482 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:11:56,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=snapshot_backup_system table=backup:system type=FLUSH } is done
2018-10-08 18:11:56,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(385): Snapshot '{ ss=snapshot_backup_system table=backup:system type=FLUSH }' has completed, notifying client.
2018-10-08 18:11:56,487 INFO [Time-limited test] impl.TableBackupClient(120): Backup backup_1539022312079 started at 1539022316487.
2018-10-08 18:11:56,495 DEBUG [Time-limited test] impl.TableBackupClient(124): Backup session backup_1539022312079 has been started.
2018-10-08 18:11:56,508 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(273): For incremental backup, current table set is [test-1539022262249]
2018-10-08 18:11:56,516 DEBUG [Time-limited test] impl.IncrementalBackupManager(81): StartCode 1539022249231 for backupID backup_1539022312079
2018-10-08 18:11:56,516 INFO [Time-limited test] impl.IncrementalBackupManager(91): Execute roll log procedure for incremental backup ...
2018-10-08 18:11:56,518 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(880): Client=hbase//172.18.128.12 procedure request for: rolllog-proc
2018-10-08 18:11:56,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure.ProcedureCoordinator(177): Submitting procedure rolllog
2018-10-08 18:11:56,519 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(196): Starting procedure 'rolllog'
2018-10-08 18:11:56,519 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 180000 ms
2018-10-08 18:11:56,521 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire'
2018-10-08 18:11:56,521 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members.
2018-10-08 18:11:56,522 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2018-10-08 18:11:56,523 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(95): Creating acquire znode:/1/rolllog-proc/acquired/rolllog
2018-10-08 18:11:56,765 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2018-10-08 18:11:56,765 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(103): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,765 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/rolllog-proc/acquired
2018-10-08 18:11:56,765 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2018-10-08 18:11:56,766 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,766 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2018-10-08 18:11:56,766 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(187): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2018-10-08 18:11:56,766 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2018-10-08 18:11:56,767 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(213): start proc data length is 35
2018-10-08 18:11:56,767 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(215): Found data for znode:/1/rolllog-proc/acquired/rolllog
2018-10-08 18:11:56,767 INFO [Time-limited test-EventThread] regionserver.LogRollRegionServerProcedureManager(128): Attempting to run a roll log procedure for backup.
2018-10-08 18:11:56,767 INFO [Time-limited test-EventThread] regionserver.LogRollBackupSubprocedure(57): Constructing a LogRollBackupSubprocedure.
2018-10-08 18:11:56,767 DEBUG [Time-limited test-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2018-10-08 18:11:56,768 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(159): Starting subprocedure 'rolllog' with timeout 60000ms
2018-10-08 18:11:56,768 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2018-10-08 18:11:56,771 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' starting 'acquire' stage
2018-10-08 18:11:56,771 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(169): Subprocedure 'rolllog' locally acquired
2018-10-08 18:11:56,771 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(244): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure (rolllog) in zk
2018-10-08 18:11:56,807 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(252): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:56,807 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,808 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,808 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:56,808 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/rolllog-proc
2018-10-08 18:11:56,808 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2018-10-08 18:11:56,808 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(174): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2018-10-08 18:11:56,808 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:56,809 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:56,809 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog
2018-10-08 18:11:56,810 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,810 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:56,811 DEBUG [Time-limited test-EventThread] procedure.Procedure(298): member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' joining acquired barrier for procedure 'rolllog' on coordinator
2018-10-08 18:11:56,811 DEBUG [Time-limited test-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@4e2efcb6[Count = 0] remaining members to acquire global barrier
2018-10-08 18:11:56,811 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution.
2018-10-08 18:11:56,811 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(119): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:56,849 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:56,849 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:56,849 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(129): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:56,850 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,850 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' received 'reached' from coordinator.
2018-10-08 18:11:56,850 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2018-10-08 18:11:56,850 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] regionserver.LogRollBackupSubprocedurePool(86): Waiting for backup procedure to finish.
2018-10-08 18:11:56,850 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(76): DRPC started: cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,851 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(93): Trying to roll log in backup subprocedure, current log number: 1539022301371 highest: 1539022301371 on cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,851 DEBUG [regionserver/cn012:0.logRoller] regionserver.LogRoller(178): WAL roll requested
2018-10-08 18:11:56,863 DEBUG [RS-EventLoopGroup-3-36] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]
2018-10-08 18:11:56,868 INFO [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(680): Rolled WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022301347.meta with entries=0, filesize=83 B; new WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022316852.meta
2018-10-08 18:11:56,868 DEBUG [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(773): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]]
2018-10-08 18:11:56,869 INFO [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(661): Archiving hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022301347.meta to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/oldWALs/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022301347.meta
2018-10-08 18:11:56,873 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022301347.meta not finished, retry = 0
2018-10-08 18:11:56,888 DEBUG [RS-EventLoopGroup-3-38] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(783): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]
2018-10-08 18:11:56,892 INFO [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(680): Rolled WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371 with entries=117, filesize=17.35 KB; new WAL /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022316880
2018-10-08 18:11:56,892 DEBUG [regionserver/cn012:0.logRoller] wal.AbstractFSWAL(773): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]]
2018-10-08 18:11:56,895 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371 not finished, retry = 0
2018-10-08 18:11:56,912 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(100): log roll took 61
2018-10-08 18:11:56,912 INFO [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(101): After roll log in backup subprocedure, current log number: 1539022316880 on cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,926 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] client.ConnectionImplementation(672): Table backup:system should be available
2018-10-08 18:11:56,926 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] impl.BackupSystemTable(244): Backup table backup:system exists and available
2018-10-08 18:11:56,928 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] client.ConnectionImplementation(672): Table backup:system_bulk should be available
2018-10-08 18:11:56,929 DEBUG [rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1] impl.BackupSystemTable(244): Backup table backup:system_bulk exists and available
2018-10-08 18:11:56,932 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(190): Subprocedure 'rolllog' locally completed
2018-10-08 18:11:56,932 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.ZKProcedureMemberRpcs(268): Marking procedure 'rolllog' completed for member 'cn012.l42scl.hortonworks.com,37486,1539022239614' in zk
2018-10-08 18:11:56,960 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,960 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(195): Subprocedure 'rolllog' has notified controller of completion
2018-10-08 18:11:56,960 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:56,960 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,962 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:56,962 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/rolllog-proc
2018-10-08 18:11:56,960 DEBUG [member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1] procedure.Subprocedure(220): Subprocedure 'rolllog' completed.
2018-10-08 18:11:56,963 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:56,964 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:56,964 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog
2018-10-08 18:11:56,965 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,965 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:56,966 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog
2018-10-08 18:11:56,966 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:56,967 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(223): Finished data from procedure 'rolllog' member 'cn012.l42scl.hortonworks.com,37486,1539022239614':
2018-10-08 18:11:56,967 DEBUG [Time-limited test-EventThread] procedure.Procedure(329): Member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more
2018-10-08 18:11:56,967 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed
2018-10-08 18:11:56,967 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(230): Running finish phase.
2018-10-08 18:11:56,967 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2018-10-08 18:11:56,967 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureCoordinator(166): Attempting to clean out zk node for op:rolllog
2018-10-08 18:11:56,967 INFO [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] procedure.ZKProcedureUtil(286): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort
2018-10-08 18:11:57,016 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,016 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(78): Received created event:/1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,016 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,016 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,017 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureCoordinator$1(198): Node created: /1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,017 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(246): Current zk system:
2018-10-08 18:11:57,017 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(248): |-/1/rolllog-proc
2018-10-08 18:11:57,017 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2018-10-08 18:11:57,017 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/rolllog-proc/abort
2018-10-08 18:11:57,017 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2018-10-08 18:11:57,017 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-abort
2018-10-08 18:11:57,017 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:57,018 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog
2018-10-08 18:11:57,018 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,019 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-acquired
2018-10-08 18:11:57,020 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog
2018-10-08 18:11:57,020 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] zookeeper.ZKUtil(355): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:57,021 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:57,021 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-reached
2018-10-08 18:11:57,022 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |----rolllog
2018-10-08 18:11:57,022 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureUtil(265): |-------cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:57,058 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2018-10-08 18:11:57,058 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.LogRollMasterProcedureManager(146): Done waiting - exec procedure for rolllog
2018-10-08 18:11:57,058 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.LogRollMasterProcedureManager(147): Distributed roll log procedure is successful!
2018-10-08 18:11:57,058 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(105): Received procedure start children changed event: /1/rolllog-proc/acquired
2018-10-08 18:11:57,058 DEBUG [(cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2018-10-08 18:11:57,058 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(155): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2018-10-08 18:11:57,061 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(614): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error)
2018-10-08 18:11:57,061 DEBUG [Time-limited test] client.HBaseAdmin(2859): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. (max 6666 ms per retry)
2018-10-08 18:11:57,061 DEBUG [Time-limited test] client.HBaseAdmin(2868): (#1) Sleeping: 100ms while waiting for procedure completion.
2018-10-08 18:11:57,061 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:57,061 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog
2018-10-08 18:11:57,062 INFO [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs$1(108): Received procedure abort children changed event: /1/rolllog-proc/abort
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] procedure.ZKProcedureMemberRpcs(135): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2018-10-08 18:11:57,062 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2018-10-08 18:11:57,162 DEBUG [Time-limited test] client.HBaseAdmin(2874): Getting current status of procedure from master...
2018-10-08 18:11:57,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1132): Checking to see if procedure from request:rolllog-proc is done
2018-10-08 18:11:57,169 DEBUG [Time-limited test] impl.IncrementalBackupManager(244): In getLogFilesForNewBackup() olderTimestamps: {cn012.l42scl.hortonworks.com:37486=1539022249231} newestTimestamps: {cn012.l42scl.hortonworks.com:37486=1539022301371}
2018-10-08 18:11:57,175 DEBUG [Time-limited test] impl.IncrementalBackupManager$NewestLogFilter(381): Skip .meta log file: cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022246561.meta
2018-10-08 18:11:57,175 DEBUG [Time-limited test] impl.IncrementalBackupManager$NewestLogFilter(381): Skip .meta log file: cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022316852.meta
2018-10-08 18:11:57,175 DEBUG [Time-limited test] impl.IncrementalBackupManager(289): currentLogFile: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371
2018-10-08 18:11:57,175 DEBUG [Time-limited test] impl.IncrementalBackupManager(289): currentLogFile: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022316880
2018-10-08 18:11:57,176 DEBUG [Time-limited test] impl.IncrementalBackupManager(325): Skip .meta log file: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/oldWALs/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022301347.meta
2018-10-08 18:11:57,610 DEBUG [Time-limited test] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/backupUT/backup_1539022312079/default/test-1539022262249/.tabledesc/.tableinfo.0000000001
2018-10-08 18:11:57,612 DEBUG [Time-limited test] util.BackupUtils(145): Attempting to copy table info for:test-1539022262249 target: hdfs://localhost:41712/backupUT/backup_1539022312079/default/test-1539022262249 descriptor: 'test-1539022262249', {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-10-08 18:11:57,612 DEBUG [Time-limited test] util.BackupUtils(147): Finished copying tableinfo.
2018-10-08 18:11:57,618 DEBUG [Time-limited test] util.BackupUtils(150): Starting to write region info for table test-1539022262249
2018-10-08 18:11:58,034 DEBUG [Time-limited test] util.BackupUtils(157): Finished writing region info for table test-1539022262249
2018-10-08 18:11:58,051 DEBUG [Time-limited test] mapreduce.WALPlayer(297): add incremental job: hdfs://localhost:41712/backupUT/.tmp/backup_1539022312079 from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371
2018-10-08 18:11:58,055 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(139): Connect 0x4d2b7339 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:58,109 DEBUG [Time-limited test] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d70c4c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:58,125 INFO [RS-EventLoopGroup-1-9] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:43118, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=MasterService
2018-10-08 18:11:58,143 INFO [Time-limited test] mapreduce.HFileOutputFormat2(645): bulkload locality sensitive enabled
2018-10-08 18:11:58,143 INFO [Time-limited test] mapreduce.HFileOutputFormat2(508): Looking up current regions for table test-1539022262249
2018-10-08 18:11:58,161 INFO [RS-EventLoopGroup-3-42] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58386, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:58,167 DEBUG [Time-limited test] mapreduce.HFileOutputFormat2(518): SplitPoint startkey for table [test-1539022262249]: [test-1539022262249;]
2018-10-08 18:11:58,168 INFO [Time-limited test] mapreduce.HFileOutputFormat2(667): Configuring 1 reduce partitions to match current region count for all tables
2018-10-08 18:11:58,169 INFO [Time-limited test] mapreduce.HFileOutputFormat2(534): Writing partition information to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase-staging/partitions_f06279ab-b81f-4aaf-8b9e-9ae850c1fb5e
2018-10-08 18:11:58,629 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.HConstants, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-common/3.0.0-SNAPSHOT/hbase-common-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,630 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-protocol/3.0.0-SNAPSHOT/hbase-protocol-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,631 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-protocol-shaded/3.0.0-SNAPSHOT/hbase-protocol-shaded-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,631 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.client.Put, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-client/3.0.0-SNAPSHOT/hbase-client-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,632 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.ipc.RpcServer, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,633 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-hadoop-compat/3.0.0-SNAPSHOT/hbase-hadoop-compat-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,633 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.mapreduce.JobUtil, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/3.0.0-SNAPSHOT/hbase-hadoop2-compat-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,742 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /mnt/disk2/a/hbase/hbase-backup/target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb/hadoop-9204436742118751577.jar
2018-10-08 18:11:58,743 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.metrics.impl.FastLongHistogram, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-metrics/3.0.0-SNAPSHOT/hbase-metrics-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,743 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.metrics.Snapshot, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-metrics-api/3.0.0-SNAPSHOT/hbase-metrics-api-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,744 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.zookeeper.ZooKeeper, using jar /home/hbase/.m2/repository/org/apache/zookeeper/zookeeper/3.4.10/zookeeper-3.4.10.jar
2018-10-08 18:11:58,744 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.io.netty.channel.Channel, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-netty/2.1.0/hbase-shaded-netty-2.1.0.jar
2018-10-08 18:11:58,745 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.google.protobuf.Message, using jar /home/hbase/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2018-10-08 18:11:58,745 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-protobuf/2.1.0/hbase-shaded-protobuf-2.1.0.jar
2018-10-08 18:11:58,745 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.com.google.common.collect.Lists, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-miscellaneous/2.1.0/hbase-shaded-miscellaneous-2.1.0.jar
2018-10-08 18:11:58,746 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.htrace.core.Tracer, using jar /home/hbase/.m2/repository/org/apache/htrace/htrace-core4/4.2.0-incubating/htrace-core4-4.2.0-incubating.jar
2018-10-08 18:11:58,746 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.codahale.metrics.MetricRegistry, using jar /home/hbase/.m2/repository/io/dropwizard/metrics/metrics-core/3.2.1/metrics-core-3.2.1.jar
2018-10-08 18:11:58,747 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.commons.lang3.ArrayUtils, using jar /home/hbase/.m2/repository/org/apache/commons/commons-lang3/3.6/commons-lang3-3.6.jar
2018-10-08 18:11:58,747 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.fasterxml.jackson.databind.ObjectMapper, using jar /home/hbase/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar
2018-10-08 18:11:58,747 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.fasterxml.jackson.core.Versioned, using jar /home/hbase/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.2/jackson-core-2.9.2.jar
2018-10-08 18:11:58,748 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class com.fasterxml.jackson.annotation.JsonView, using jar /home/hbase/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.2/jackson-annotations-2.9.2.jar
2018-10-08 18:11:58,748 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.zookeeper.ZKWatcher, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-zookeeper/3.0.0-SNAPSHOT/hbase-zookeeper-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,750 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-common/3.0.0-SNAPSHOT/hbase-common-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,835 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.util.MapReduceExtendedCell, using jar /mnt/disk2/a/hbase/hbase-backup/target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb/hadoop-282794890416840407.jar
2018-10-08 18:11:58,836 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /mnt/disk2/a/hbase/hbase-backup/target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb/hadoop-282794890416840407.jar
2018-10-08 18:11:58,837 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-common/3.0.0-SNAPSHOT/hbase-common-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,837 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.util.MapReduceExtendedCell, using jar /mnt/disk2/a/hbase/hbase-backup/target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb/hadoop-282794890416840407.jar
2018-10-08 18:11:58,838 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.mapreduce.MultiTableHFileOutputFormat, using jar /mnt/disk2/a/hbase/hbase-backup/target/test-data/43c1e832-0e89-f597-f10b-b76a002e64fb/hadoop-282794890416840407.jar
2018-10-08 18:11:58,838 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /home/hbase/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.1/hadoop-mapreduce-client-core-3.1.1.jar
2018-10-08 18:11:58,838 INFO [Time-limited test] mapreduce.HFileOutputFormat2(687): Incremental output configured for tables: default:test-1539022262249
2018-10-08 18:11:58,839 INFO [Time-limited test] client.ConnectionImplementation(1801): Closing master protocol: MasterService
2018-10-08 18:11:58,839 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x4d2b7339 to localhost:54078
2018-10-08 18:11:58,839 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:11:58,843 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hbase.thirdparty.com.google.common.base.Preconditions, using jar /home/hbase/.m2/repository/org/apache/hbase/thirdparty/hbase-shaded-miscellaneous/2.1.0/hbase-shaded-miscellaneous-2.1.0.jar
2018-10-08 18:11:58,844 DEBUG [Time-limited test] mapreduce.TableMapReduceUtil(965): For class org.apache.hadoop.hbase.regionserver.wal.WALCellCodec, using jar /home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar
2018-10-08 18:11:58,860 WARN [Time-limited test] mapreduce.JobResourceUploader(147): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-10-08 18:11:58,888 WARN [Time-limited test] mapreduce.JobResourceUploader(480): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2018-10-08 18:11:58,897 DEBUG [Time-limited test] mapreduce.WALInputFormat(308): Scanning hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371 for WAL files
2018-10-08 18:11:58,900 INFO [Time-limited test] mapreduce.WALInputFormat(324): Found: HdfsLocatedFileStatus{path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371; isDirectory=false; length=17779; replication=1; blocksize=268435456; modification_time=1539022316997; access_time=1539022301376; owner=hbase.hfs.0; group=supergroup; permission=rw-r--r--; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false}
2018-10-08 18:11:59,053 WARN [Time-limited test] fs.FileUtil(1075): Command 'ln -s /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_local/1539022318977/libjars /mnt/disk2/a/hbase/hbase-backup/libjars/*' failed 1 with: ln: failed to create symbolic link ‘/mnt/disk2/a/hbase/hbase-backup/libjars/*’: No such file or directory
2018-10-08 18:11:59,054 WARN [Time-limited test] mapred.LocalDistributedCacheManager(202): Failed to create symlink: /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/mapred_local/1539022318977/libjars <- /mnt/disk2/a/hbase/hbase-backup/libjars/*
2018-10-08 18:11:59,158 INFO [LocalJobRunner Map Task Executor #0] mapreduce.WALInputFormat$WALRecordReader(157): Opening reader for hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371 (-9223372036854775808:9223372036854775807) length:17779
2018-10-08 18:11:59,185 INFO [LocalJobRunner Map Task Executor #0] mapreduce.WALInputFormat$WALRecordReader(209): Reached end of file.
2018-10-08 18:11:59,185 INFO [LocalJobRunner Map Task Executor #0] mapreduce.WALInputFormat$WALRecordReader(245): Closing reader
2018-10-08 18:11:59,378 INFO [pool-118-thread-1] zookeeper.ReadOnlyZKClient(139): Connect 0x15bd4597 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:11:59,452 DEBUG [pool-118-thread-1] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c63cd7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:11:59,465 INFO [RS-EventLoopGroup-3-44] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58412, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:11:59,468 INFO [pool-118-thread-1] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x15bd4597 to localhost:54078
2018-10-08 18:11:59,468 DEBUG [pool-118-thread-1] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:11:59,471 DEBUG [pool-118-thread-1] mapreduce.HFileOutputFormat2$1(323): first rowkey: [row-t10]
2018-10-08 18:11:59,473 DEBUG [pool-118-thread-1] mapreduce.HFileOutputFormat2$1(335): use favored nodes writer: cn012.l42scl.hortonworks.com
2018-10-08 18:11:59,475 INFO [pool-118-thread-1] hfile.CacheConfig(262): Created cacheConfig: blockCache=LruBlockCache{blockCount=8, currentSize=761.79 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=761.79 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:11:59,497 WARN [Thread-873] hdfs.DataStreamer(1854): These favored nodes were specified but not chosen: [cn012.l42scl.hortonworks.com:37486] Specified favored nodes: [cn012.l42scl.hortonworks.com:37486]
2018-10-08 18:12:00,134 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(333): Incremental copy HFiles is starting. dest=hdfs://localhost:41712/backupUT
2018-10-08 18:12:00,134 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(343): Setting incremental copy HFiles job name to: Incremental_Backup-HFileCopy-backup_1539022312079
2018-10-08 18:12:00,134 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(390): Doing COPY_TYPE_DISTCP
2018-10-08 18:12:00,176 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(399): DistCp options: [hdfs://localhost:41712/backupUT/.tmp/backup_1539022312079, hdfs://localhost:41712/backupUT]
2018-10-08 18:12:00,301 WARN [Time-limited test] mapreduce.JobResourceUploader(147): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-10-08 18:12:01,470 INFO [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: 1.0 mapProgress: 1.0
2018-10-08 18:12:01,854 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_608368834_22 at /127.0.0.1:39062 [Receiving block BP-827454334-172.18.128.12-1539022232083:blk_1073741879_1055]] datanode.BlockReceiver(733): Slow BlockReceiver write data to disk cost:381ms (threshold=300ms), volume=file:/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data1/, blockId=1073741879
2018-10-08 18:12:01,855 INFO [AsyncFSWAL-0] wal.AbstractFSWAL(959): Slow sync cost: 382 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32877,DS-0430b48e-0911-4297-8877-48cfe5842d70,DISK]]
2018-10-08 18:12:01,856 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(140): Backup progress data "100%" has been updated to backup system table for backup_1539022312079
2018-10-08 18:12:01,857 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(234): Backup progress data updated to backup system table: "Progress: 100.0% - 8627 bytes copied."
2018-10-08 18:12:01,858 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(244): DistCp job-id: job_local1769007647_0003 completed: true true
2018-10-08 18:12:01,881 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(247): Counters: 25
	File System Counters
		FILE: Number of bytes read=2479173
		FILE: Number of bytes written=2594481
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=147573
		HDFS: Number of bytes written=2500051
		HDFS: Number of read operations=519
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=257
	Map-Reduce Framework
		Map input records=6
		Map output records=0
		Input split bytes=293
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=0
		Total committed heap usage (bytes)=2829582336
	File Input Format Counters
		Bytes Read=1466
	File Output Format Counters
		Bytes Written=0
	DistCp Counters
		Bandwidth in Bytes=8627
		Bytes Copied=8627
		Bytes Expected=8627
		Files Copied=2
		DIR_COPY=4
2018-10-08 18:12:01,882 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(354): Incremental copy HFiles from hdfs://localhost:41712/backupUT/.tmp/backup_1539022312079 to hdfs://localhost:41712/backupUT finished.
2018-10-08 18:12:01,884 DEBUG [Time-limited test] impl.BackupSystemTable(1021): add: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.1539022301371
2018-10-08 18:12:01,908 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,908 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,909 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,909 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,909 DEBUG [Time-limited test] impl.BackupSystemTable(477): found orig hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ for f of table be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,909 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,909 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,910 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,910 DEBUG [Time-limited test] impl.BackupSystemTable(1872): bulk row string bulk:test-1539022262249:be1bf5445faddb63e45726410a07917a:f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ region be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,910 DEBUG [Time-limited test] impl.BackupSystemTable(477): found orig hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ for f of table be1bf5445faddb63e45726410a07917a
2018-10-08 18:12:01,915 INFO [Time-limited test] impl.IncrementalTableBackupClient(212): Copy 2 active bulk loaded files. Attempt =1
2018-10-08 18:12:01,915 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(333): Incremental copy HFiles is starting. dest=hdfs://localhost:41712/backupUT/backup_1539022312079
2018-10-08 18:12:01,915 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(343): Setting incremental copy HFiles job name to: Incremental_Backup-HFileCopy-backup_1539022312079
2018-10-08 18:12:01,915 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(390): Doing COPY_TYPE_DISTCP
2018-10-08 18:12:01,950 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(399): DistCp options: [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_, hdfs://localhost:41712/backupUT/backup_1539022312079]
2018-10-08 18:12:02,003 WARN [Time-limited test] mapreduce.JobResourceUploader(147): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-10-08 18:12:03,067 WARN  [Thread-933] mapred.LocalJobRunner$Job(590): job_local1175594345_0004
java.io.IOException: Inconsistent sequence file: current chunk file org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ length = 5142 aclEntries = null, xAttrs = null}
    at org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
    at org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
2018-10-08 18:12:03,150 INFO  [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: 1.0 mapProgress: 1.0
2018-10-08 18:12:03,155 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(140): Backup progress data "100%" has been updated to backup system table for backup_1539022312079
2018-10-08 18:12:03,155 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(234): Backup progress data updated to backup system table: "Progress: 100.0% - 10242 bytes copied."
2018-10-08 18:12:03,156 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(244): DistCp job-id: job_local1175594345_0004 completed: true false
2018-10-08 18:12:03,169 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(247): Counters: 24
    File System Counters
        FILE: Number of bytes read=2667912
        FILE: Number of bytes written=3573866
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=157815
        HDFS: Number of bytes written=2510293
        HDFS: Number of read operations=552
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=265
    Map-Reduce Framework
        Map input records=2
        Map output records=0
        Input split bytes=294
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=0
        Total committed heap usage (bytes)=2829582336
    File Input Format Counters
        Bytes Read=864
    File Output Format Counters
        Bytes Written=0
    DistCp Counters
        Bandwidth in Btyes=10242
        Bytes Copied=10242
        Bytes Expected=10242
        Files Copied=2
2018-10-08 18:12:03,170 ERROR [Time-limited test] tools.DistCp(167): Exception encountered
java.lang.Exception: DistCp job-id: job_local1175594345_0004 failed
    at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:249)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
    at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:408)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:348)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.copyBulkLoadedFiles(IncrementalTableBackupClient.java:219)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.handleBulkLoad(IncrementalTableBackupClient.java:198)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:320)
    at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
    at org.apache.hadoop.hbase.backup.TestIncrementalBackupWithBulkLoad.TestIncBackupDeleteTable(TestIncrementalBackupWithBulkLoad.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
2018-10-08 18:12:03,170 ERROR [Time-limited test] impl.IncrementalTableBackupClient(350): Copy incremental HFile files failed with return code: -999.
2018-10-08 18:12:03,172 WARN  [Time-limited test] impl.IncrementalTableBackupClient(367): Could not delete hdfs://localhost:41712/backupUT/.tmp/backup_1539022312079
2018-10-08 18:12:03,174 DEBUG [Time-limited test] impl.IncrementalTableBackupClient(264): 0 files have been archived.
2018-10-08 18:12:03,174 ERROR [Time-limited test] impl.TableBackupClient(235): Unexpected Exception : Failed copy from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_,hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ to hdfs://localhost:41712/backupUT/backup_1539022312079
java.io.IOException: Failed copy from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_,hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ to hdfs://localhost:41712/backupUT/backup_1539022312079
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:351)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.copyBulkLoadedFiles(IncrementalTableBackupClient.java:219)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.handleBulkLoad(IncrementalTableBackupClient.java:198)
    at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:320)
    at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
    at org.apache.hadoop.hbase.backup.TestIncrementalBackupWithBulkLoad.TestIncBackupDeleteTable(TestIncrementalBackupWithBulkLoad.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
2018-10-08 18:12:03,175 ERROR [Time-limited test] impl.TableBackupClient(248): BackupId=backup_1539022312079,startts=1539022316487,failedts=1539022323175,failedphase=INCREMENTAL_COPY,failedmessage=Failed copy from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_,hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ to hdfs://localhost:41712/backupUT/backup_1539022312079
2018-10-08 18:12:03,175 DEBUG [Time-limited test] impl.BackupSystemTable(1631): Restoring backup:system from snapshot
2018-10-08 18:12:03,187 INFO  [Time-limited test] client.HBaseAdmin$15(922): Started disable of backup:system
2018-10-08 18:12:03,195 INFO  [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$10(2554): Client=hbase//172.18.128.12 disable backup:system
2018-10-08 18:12:03,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=35, state=RUNNABLE:DISABLE_TABLE_PREPARE, hasLock=false; DisableTableProcedure table=backup:system
2018-10-08 18:12:03,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:03,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:03,691 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022323691}]},"ts":1539022323691}
2018-10-08 18:12:03,695 INFO  [PEWorker-5] hbase.MetaTableAccessor(1700): Updated tableName=backup:system, state=DISABLING in hbase:meta
2018-10-08 18:12:03,749 INFO  [PEWorker-5] procedure.DisableTableProcedure(295): Set backup:system to state=DISABLING
2018-10-08 18:12:03,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:03,856 INFO  [PEWorker-5] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=36, ppid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, UNASSIGN}]
2018-10-08 18:12:03,921 INFO  [PEWorker-5] procedure.MasterProcedureScheduler(689): pid=36, ppid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, UNASSIGN checking lock on 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:03,973 INFO  [PEWorker-5] assignment.RegionStateStore(200): pid=36 updating hbase:meta row=29493d1f83444b313854401df15f30aa, regionState=CLOSING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:12:03,977 INFO  [PEWorker-5] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=37, ppid=36, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.CloseRegionProcedure}]
2018-10-08 18:12:04,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:04,333 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] handler.UnassignRegionHandler(102): Close 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:04,334 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1554): Closing 29493d1f83444b313854401df15f30aa, disabling compactions & flushes
2018-10-08 18:12:04,334 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1594): Updates disabled for region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:04,334 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(2647): Flushing 2/2 column families, dataSize=1.44 KB heapSize=2.59 KB
2018-10-08 18:12:04,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:04,752 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=923 B at sequenceid=28 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/meta/0f0b731c61fa4bf88161308837faab72
2018-10-08 18:12:05,175 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=552 B at sequenceid=28 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/0590bb74f8ad4148b48f1a62798abe17
2018-10-08 18:12:05,187 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/meta/0f0b731c61fa4bf88161308837faab72 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/0f0b731c61fa4bf88161308837faab72
2018-10-08 18:12:05,196 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/0f0b731c61fa4bf88161308837faab72, entries=6, sequenceid=28, filesize=5.9 K
2018-10-08 18:12:05,198 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/0590bb74f8ad4148b48f1a62798abe17 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/0590bb74f8ad4148b48f1a62798abe17
2018-10-08 18:12:05,206 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/0590bb74f8ad4148b48f1a62798abe17, entries=1, sequenceid=28, filesize=5.0 K
2018-10-08 18:12:05,208 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(2856): Finished flush of dataSize ~1.44 KB/1475, heapSize ~2.56 KB/2624, currentSize=0 B/0 for 29493d1f83444b313854401df15f30aa in 874ms, sequenceid=28, compaction requested=true
2018-10-08 18:12:05,223 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system/29493d1f83444b313854401df15f30aa/recovered.edits/31.seqid, newMaxSeqId=31, maxSeqId=1
2018-10-08 18:12:05,224 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:12:05,228 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1711): Closed backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:05,229 INFO  [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=36 updating hbase:meta row=29493d1f83444b313854401df15f30aa, regionState=CLOSED
2018-10-08 18:12:05,234 INFO  [RS_CLOSE_REGION-regionserver/cn012:0-0] handler.UnassignRegionHandler(124): Closed 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:05,414 INFO  [PEWorker-6] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=36, ppid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=true; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, UNASSIGN; resume parent processing.
2018-10-08 18:12:05,414 INFO  [PEWorker-6] procedure2.ProcedureExecutor(1507): Finished pid=37, ppid=36, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.CloseRegionProcedure in 1.3050sec
2018-10-08 18:12:05,547 INFO  [PEWorker-16] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=35, state=RUNNABLE:DISABLE_TABLE_ADD_REPLICATION_BARRIER, hasLock=true; DisableTableProcedure table=backup:system; resume parent processing.
2018-10-08 18:12:05,548 INFO  [PEWorker-16] procedure2.ProcedureExecutor(1507): Finished pid=36, ppid=35, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, UNASSIGN in 1.5580sec
2018-10-08 18:12:05,598 DEBUG [PEWorker-9] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022325597}]},"ts":1539022325597}
2018-10-08 18:12:05,602 INFO  [PEWorker-9] hbase.MetaTableAccessor(1700): Updated tableName=backup:system, state=DISABLED in hbase:meta
2018-10-08 18:12:05,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:05,607 INFO  [PEWorker-9] procedure.DisableTableProcedure(307): Set backup:system to state=DISABLED
2018-10-08 18:12:05,773 INFO  [PEWorker-9] procedure2.ProcedureExecutor(1507): Finished pid=35, state=SUCCESS, hasLock=false; DisableTableProcedure table=backup:system in 2.4610sec
2018-10-08 18:12:07,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=35
2018-10-08 18:12:07,606 INFO  [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: DISABLE, Table Name: backup:system, procId: 35 completed
2018-10-08 18:12:07,619 INFO  [Time-limited test] client.HBaseAdmin(2727): Taking restore-failsafe snapshot: hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:07,621 INFO  [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1492): Client=hbase//172.18.128.12 snapshot request for:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }
2018-10-08 18:12:07,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotDescriptionUtils(313): Creation time not specified, setting to:1539022327621 (current time:1539022327621).
2018-10-08 18:12:07,623 INFO  [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(139): Connect 0x2d7a55c0 to localhost:54078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-10-08 18:12:07,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62280faa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2018-10-08 18:12:07,650 INFO  [RS-EventLoopGroup-3-46] ipc.ServerRpcConnection(556): Connection from 172.18.128.12:58584, version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth:SIMPLE), service=ClientService
2018-10-08 18:12:07,651 INFO  [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x2d7a55c0 to localhost:54078
2018-10-08 18:12:07,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:12:07,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(565): No existing snapshot, attempting snapshot...
2018-10-08 18:12:07,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(620): Table is disabled, running snapshot entirely on master.
2018-10-08 18:12:07,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=38, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:12:07,807 DEBUG [PEWorker-7] locking.LockProcedure(309): LOCKED pid=38, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:12:07,876 INFO  [PEWorker-7] procedure2.TimeoutExecutorThread(82): ADDED pid=38, state=WAITING_TIMEOUT, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE; timeout=600000, timestamp=1539022927876
2018-10-08 18:12:07,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(622): Started snapshot: { ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }
2018-10-08 18:12:07,877 INFO  [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(175): Running DISABLED table snapshot hbase-failsafe-snapshot_backup_system-1539022327619 C_M_SNAPSHOT_TABLE on table backup:system
2018-10-08 18:12:07,877 DEBUG [Time-limited test] client.HBaseAdmin(2585): Waiting a max of 300000 ms for snapshot '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }'' to complete. (max 6666 ms per retry)
2018-10-08 18:12:07,877 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#1) Sleeping: 100ms while waiting for snapshot completion.
2018-10-08 18:12:07,978 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:12:07,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH } is done
2018-10-08 18:12:07,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:12:07,980 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#2) Sleeping: 200ms while waiting for snapshot completion.
2018-10-08 18:12:08,180 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:12:08,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH } is done
2018-10-08 18:12:08,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:12:08,183 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#3) Sleeping: 300ms while waiting for snapshot completion.
2018-10-08 18:12:08,302 INFO  [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.DisabledTableSnapshotHandler(99): Starting to write region info and WALs for regions for offline snapshot:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=DISABLED }
2018-10-08 18:12:08,307 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(294): Storing region-info for snapshot.
2018-10-08 18:12:08,307 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(299): Creating references for hfiles
2018-10-08 18:12:08,316 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(352): Adding snapshot references for [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/0f0b731c61fa4bf88161308837faab72, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a] hfiles
2018-10-08 18:12:08,316 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(360): Adding reference for hfile (1/2): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/0f0b731c61fa4bf88161308837faab72
2018-10-08 18:12:08,317 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(360): Adding reference for hfile (2/2): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a
2018-10-08 18:12:08,322 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(352): Adding snapshot references for [hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/0590bb74f8ad4148b48f1a62798abe17, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949] hfiles
2018-10-08 18:12:08,322 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(360): Adding reference for hfile (1/3): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/0590bb74f8ad4148b48f1a62798abe17
2018-10-08 18:12:08,323 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(360): Adding reference for hfile (2/3): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df
2018-10-08 18:12:08,324 DEBUG [DisabledTableSnapshot-pool37-t1] snapshot.SnapshotManifest(360): Adding reference for hfile (3/3): hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949
2018-10-08 18:12:08,484 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:12:08,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH } is done
2018-10-08 18:12:08,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:12:08,490 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#4) Sleeping: 500ms while waiting for snapshot completion.
2018-10-08 18:12:08,738 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.DisabledTableSnapshotHandler(121): Marking snapshot{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=DISABLED } as finished.
2018-10-08 18:12:08,739 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifest(478): Convert to Single Snapshot Manifest
2018-10-08 18:12:08,740 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.SnapshotManifestV1(128): No regions under directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:08,990 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:12:08,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH } is done
2018-10-08 18:12:08,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:12:08,996 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#5) Sleeping: 1000ms while waiting for snapshot completion.
2018-10-08 18:12:09,174 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(265): Sentinel is done, just moving the snapshot from hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/hbase-failsafe-snapshot_backup_system-1539022327619 to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:09,996 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:12:10,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH } is done
2018-10-08 18:12:10,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(388): Snapshoting '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }' is still in progress!
2018-10-08 18:12:10,002 DEBUG [Time-limited test] client.HBaseAdmin(2594): (#6) Sleeping: 2000ms while waiting for snapshot completion.
2018-10-08 18:12:10,010 INFO [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(222): Snapshot hbase-failsafe-snapshot_backup_system-1539022327619 of table backup:system completed
2018-10-08 18:12:10,011 DEBUG [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(235): Launching cleanup of working dir:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:10,011 ERROR [MASTER_TABLE_OPERATIONS-master/cn012:0-0] snapshot.TakeSnapshotHandler(240): Couldn't delete snapshot working directory:hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.hbase-snapshot/.tmp/hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:10,016 DEBUG [PEWorker-8] locking.LockProcedure(240): UNLOCKED pid=38, state=RUNNABLE, hasLock=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE
2018-10-08 18:12:10,181 INFO [PEWorker-8] procedure2.ProcedureExecutor(1507): Finished pid=38, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=backup:system, type=EXCLUSIVE in 2.3550sec
2018-10-08 18:12:10,736 WARN [HBase-Metrics2-1] impl.MetricsConfig(134): Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2018-10-08 18:12:12,002 DEBUG [Time-limited test] client.HBaseAdmin(2600): Getting current status of snapshot from master...
2018-10-08 18:12:12,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1157): Checking to see if snapshot from request:{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH } is done
2018-10-08 18:12:12,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(385): Snapshot '{ ss=hbase-failsafe-snapshot_backup_system-1539022327619 table=backup:system type=FLUSH }' has completed, notifying client.
2018-10-08 18:12:12,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=39, state=RUNNABLE:RESTORE_SNAPSHOT_PRE_OPERATION, hasLock=false; RestoreSnapshotProcedure (table=backup:system snapshot=name: "snapshot_backup_system" table: "backup:system" creation_time: 1539022312126 type: FLUSH version: 2 )
2018-10-08 18:12:12,235 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(846): Restore snapshot=snapshot_backup_system as table=backup:system
2018-10-08 18:12:12,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=39
2018-10-08 18:12:12,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=39
2018-10-08 18:12:12,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=39
2018-10-08 18:12:12,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=39
2018-10-08 18:12:12,899 DEBUG [PEWorker-10] util.FSTableDescriptors(683): Wrote into hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/.tabledesc/.tableinfo.0000000002
2018-10-08 18:12:12,903 DEBUG [PEWorker-10] util.FSTableDescriptors(629): Deleted hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/.tabledesc/.tableinfo.0000000001
2018-10-08 18:12:12,903 INFO [PEWorker-10] util.FSTableDescriptors(594): Updated tableinfo=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/.tabledesc/.tableinfo.0000000002
2018-10-08 18:12:12,956 INFO [PEWorker-10] procedure.RestoreSnapshotProcedure(389): Starting restore snapshot={ ss=snapshot_backup_system table=backup:system type=FLUSH }
2018-10-08 18:12:12,969 INFO [PEWorker-10] snapshot.RestoreSnapshotHelper(184): starting restore table regions using snapshot=name: "snapshot_backup_system" table: "backup:system" creation_time: 1539022312126 type: FLUSH version: 2
2018-10-08 18:12:12,970 DEBUG [PEWorker-10] snapshot.RestoreSnapshotHelper(801): get table regions: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system
2018-10-08 18:12:12,977 DEBUG [PEWorker-10] snapshot.RestoreSnapshotHelper(810): found 1 regions for table=backup:system
2018-10-08 18:12:12,977 INFO [PEWorker-10] snapshot.RestoreSnapshotHelper(230): region to restore: 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:12,993 DEBUG [RestoreSnapshot-pool41-t1] backup.HFileArchiver(444): Archived from FileablePath, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/0f0b731c61fa4bf88161308837faab72 to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/archive/data/backup/system/29493d1f83444b313854401df15f30aa/meta/0f0b731c61fa4bf88161308837faab72
2018-10-08 18:12:12,999 DEBUG [RestoreSnapshot-pool41-t1] backup.HFileArchiver(444): Archived from FileablePath, hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/0590bb74f8ad4148b48f1a62798abe17 to hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/archive/data/backup/system/29493d1f83444b313854401df15f30aa/session/0590bb74f8ad4148b48f1a62798abe17
2018-10-08 18:12:13,000 INFO [PEWorker-10] snapshot.RestoreSnapshotHelper(273): finishing restore table regions using snapshot=name: "snapshot_backup_system" table: "backup:system" creation_time: 1539022312126 type: FLUSH version: 2
2018-10-08 18:12:13,057 DEBUG [PEWorker-10] hbase.MetaTableAccessor(2180): Delete {"totalColumns":1,"row":"backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":1539022333056}]},"ts":9223372036854775807}
2018-10-08 18:12:13,065 INFO [PEWorker-10] hbase.MetaTableAccessor(1868): Deleted 1 regions from META
2018-10-08 18:12:13,065 DEBUG [PEWorker-10] hbase.MetaTableAccessor(1869): Deleted regions: [{ENCODED => 29493d1f83444b313854401df15f30aa, NAME => 'backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.', STARTKEY => '', ENDKEY => ''}]
2018-10-08 18:12:13,066 DEBUG [PEWorker-10] hbase.MetaTableAccessor(2180): Put {"totalColumns":2,"row":"backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":1539022333057},{"qualifier":"state","vlen":6,"tag":[],"timestamp":1539022333057}]},"ts":1539022333057}
2018-10-08 18:12:13,070 INFO [PEWorker-10] hbase.MetaTableAccessor(1555): Added 1 regions to meta.
2018-10-08 18:12:13,070 INFO [PEWorker-10] hbase.MetaTableAccessor(1890): Overwritten 1 regions to Meta
2018-10-08 18:12:13,071 DEBUG [PEWorker-10] hbase.MetaTableAccessor(1891): Overwritten regions: [{ENCODED => 29493d1f83444b313854401df15f30aa, NAME => 'backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.', STARTKEY => '', ENDKEY => ''}]
2018-10-08 18:12:13,072 INFO [PEWorker-10] procedure.RestoreSnapshotProcedure(468): Restore snapshot={ ss=snapshot_backup_system table=backup:system type=FLUSH } on table=backup:system completed!
2018-10-08 18:12:13,314 INFO [PEWorker-10] procedure2.ProcedureExecutor(1507): Finished pid=39, state=SUCCESS, hasLock=false; RestoreSnapshotProcedure (table=backup:system snapshot=name: "snapshot_backup_system" table: "backup:system" creation_time: 1539022312126 type: FLUSH version: 2 ) in 1.0990sec
2018-10-08 18:12:13,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=39
2018-10-08 18:12:13,361 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: MODIFY, Table Name: backup:system, procId: 39 completed
2018-10-08 18:12:13,361 INFO [Time-limited test] client.HBaseAdmin(2763): Deleting restore-failsafe snapshot: hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:13,363 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(686): Client=hbase//172.18.128.12 delete name: "hbase-failsafe-snapshot_backup_system-1539022327619"
2018-10-08 18:12:13,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(315): Deleting snapshot: hbase-failsafe-snapshot_backup_system-1539022327619
2018-10-08 18:12:13,372 INFO [Time-limited test] client.HBaseAdmin$14(857): Started enable of backup:system
2018-10-08 18:12:13,381 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.HMaster$9(2521): Client=hbase//172.18.128.12 enable backup:system
2018-10-08 18:12:13,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] procedure2.ProcedureExecutor(1124): Stored pid=40, state=RUNNABLE:ENABLE_TABLE_PREPARE, hasLock=false; EnableTableProcedure table=backup:system
2018-10-08 18:12:13,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=40
2018-10-08 18:12:13,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=40
2018-10-08 18:12:13,697 INFO [PEWorker-11] procedure.EnableTableProcedure(368): Attempting to enable the table backup:system
2018-10-08 18:12:13,698 DEBUG [PEWorker-11] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022333698}]},"ts":1539022333698}
2018-10-08 18:12:13,702 INFO [PEWorker-11] hbase.MetaTableAccessor(1700): Updated tableName=backup:system, state=ENABLING in hbase:meta
2018-10-08 18:12:13,778 INFO [PEWorker-11] procedure.EnableTableProcedure(142): 0 META entries added for the given regionReplicaCount 1 for the table backup:system
2018-10-08 18:12:13,779 DEBUG [PEWorker-11] procedure.EnableTableProcedure(146): There is no change to the number of region replicas. Assigning the available regions. Current and previous replica count is 1
2018-10-08 18:12:13,779 INFO [PEWorker-11] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=41, ppid=40, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN}]
2018-10-08 18:12:13,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=40
2018-10-08 18:12:13,873 INFO [PEWorker-12] procedure.MasterProcedureScheduler(689): pid=41, ppid=40, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN checking lock on 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:13,923 INFO [PEWorker-12] assignment.TransitRegionStateProcedure(160): Starting pid=41, ppid=40, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN; rit=CLOSED, location=null; forceNewPlan=false, retain=false
2018-10-08 18:12:14,077 INFO [PEWorker-1] assignment.RegionStateStore(200): pid=41 updating hbase:meta row=29493d1f83444b313854401df15f30aa, regionState=OPENING, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:12:14,082 INFO [PEWorker-1] procedure2.ProcedureExecutor(1738): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
2018-10-08 18:12:14,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=40
2018-10-08 18:12:14,332 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(101): Open backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:14,332 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7217): Opening region: {ENCODED => 29493d1f83444b313854401df15f30aa, NAME => 'backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.', STARTKEY => '', ENDKEY => ''}
2018-10-08 18:12:14,333 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(160): System coprocessor org.apache.hadoop.hbase.backup.BackupObserver loaded, priority=536870911.
2018-10-08 18:12:14,333 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.MetricsRegionSourceImpl(75): Creating new MetricsRegionSourceImpl for table system 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,333 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(836): Instantiated backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2018-10-08 18:12:14,333 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7256): checking encryption for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,333 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(7261): checking classloading for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,340 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta
2018-10-08 18:12:14,341 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta
2018-10-08 18:12:14,342 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] hfile.CacheConfig(239): Created cacheConfig for meta: blockCache=LruBlockCache{blockCount=8, currentSize=761.79 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=761.79 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:12:14,343 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:12:14,359 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(582): loaded hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a
2018-10-08 18:12:14,359 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(327): Store=meta, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:12:14,362 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session
2018-10-08 18:12:14,362 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] util.CommonFSUtils(598): Set storagePolicy=HOT for path=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session
2018-10-08 18:12:14,363 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] hfile.CacheConfig(239): Created cacheConfig for session: blockCache=LruBlockCache{blockCount=8, currentSize=761.79 KB, freeSize=994.86 MB, maxSize=995.60 MB, heapSize=761.79 KB, minSize=945.82 MB, minFactor=0.95, multiSize=472.91 MB, multiFactor=0.5, singleSize=236.46 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-10-08 18:12:14,364 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] compactions.CompactionConfiguration(147): size [128 MB, 8.00 EB, 8.00 EB); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2018-10-08 18:12:14,377 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(582): loaded hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df
2018-10-08 18:12:14,386 DEBUG [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(582): loaded hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949
2018-10-08 18:12:14,386 INFO [StoreOpener-29493d1f83444b313854401df15f30aa-1] regionserver.HStore(327): Store=session, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50
2018-10-08 18:12:14,386 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(949): replaying wal for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,390 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,392 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(4611): Found 0 recovered edits file(s) under hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system/29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,392 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(957): stopping wal replay for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,392 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(969): Cleaning up temporary data for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,393 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(980): Cleaning up detritus for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,395 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table backup:system descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0M)) instead.
2018-10-08 18:12:14,396 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1005): writing seq id for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,397 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1009): Opened 29493d1f83444b313854401df15f30aa; next sequenceid=32
2018-10-08 18:12:14,397 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegion(1016): Running coprocessor post-open hooks for 29493d1f83444b313854401df15f30aa
2018-10-08 18:12:14,398 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2198): Post open deploy tasks for backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:14,408 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HStore(582): loaded hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/meta/b5fc04967580469080d43be998276c7a
2018-10-08 18:12:14,419 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HStore(582): loaded hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/1c82b52cf5bd40d894d18380212371df
2018-10-08 18:12:14,426 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HStore(582): loaded hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/71375c40605f4c24904246837fdc4949
2018-10-08 18:12:14,431 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] assignment.RegionStateStore(200): pid=41 updating hbase:meta row=29493d1f83444b313854401df15f30aa, regionState=OPEN, openSeqNum=32, regionLocation=cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:12:14,434 DEBUG [RS_OPEN_REGION-regionserver/cn012:0-0] regionserver.HRegionServer(2222): Finished post open deploy task for backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:14,434 INFO [RS_OPEN_REGION-regionserver/cn012:0-0] handler.AssignRegionHandler(138): Opened backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:14,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=40
2018-10-08 18:12:14,876 INFO [PEWorker-13] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=41, ppid=40, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN; resume parent processing.
2018-10-08 18:12:14,877 INFO [PEWorker-13] procedure2.ProcedureExecutor(1507): Finished pid=42, ppid=41, state=SUCCESS, hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure in 440msec
2018-10-08 18:12:15,122 INFO [PEWorker-15] procedure2.ProcedureExecutor(1878): Finished subprocedure(s) of pid=40, state=RUNNABLE:ENABLE_TABLE_SET_ENABLED_TABLE_STATE, hasLock=false; EnableTableProcedure table=backup:system; resume parent processing.
2018-10-08 18:12:15,123 INFO [PEWorker-15] procedure2.ProcedureExecutor(1507): Finished pid=41, ppid=40, state=SUCCESS, hasLock=false; TransitRegionStateProcedure table=backup:system, region=29493d1f83444b313854401df15f30aa, ASSIGN in 1.0970sec
2018-10-08 18:12:15,231 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2180): Put {"totalColumns":1,"row":"backup:system","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1539022335231}]},"ts":1539022335231}
2018-10-08 18:12:15,237 INFO [PEWorker-3] hbase.MetaTableAccessor(1700): Updated tableName=backup:system, state=ENABLED in hbase:meta
2018-10-08 18:12:15,249 INFO [PEWorker-3] procedure.EnableTableProcedure(386): Table 'backup:system' was successfully enabled.
2018-10-08 18:12:15,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1507): Finished pid=40, state=SUCCESS, hasLock=false; EnableTableProcedure table=backup:system in 1.9140sec
2018-10-08 18:12:15,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(1175): Checking to see if procedure is done pid=40
2018-10-08 18:12:15,665 INFO [Time-limited test] client.HBaseAdmin$TableFuture(3721): Operation: ENABLE, Table Name: backup:system, procId: 40 completed
2018-10-08 18:12:15,665 DEBUG [Time-limited test] impl.BackupSystemTable(1638): Done restoring backup system table
2018-10-08 18:12:15,666 DEBUG [Time-limited test] impl.BackupSystemTable(1665): Deleting snapshot_backup_system from the system
2018-10-08 18:12:15,677 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] master.MasterRpcServices(686): Client=hbase//172.18.128.12 delete name: "snapshot_backup_system"
2018-10-08 18:12:15,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545] snapshot.SnapshotManager(315): Deleting snapshot: snapshot_backup_system
2018-10-08 18:12:15,683 DEBUG [Time-limited test] impl.BackupSystemTable(1670): Done deleting backup system table snapshot
2018-10-08 18:12:15,683 DEBUG [Time-limited test] impl.TableBackupClient(190): Trying to cleanup up target dir. Current backup phase: INCREMENTAL_COPY
2018-10-08 18:12:15,684 DEBUG [Time-limited test] impl.TableBackupClient(205): Cleaning up uncompleted backup data at hdfs://localhost:41712/backupUT/backup_1539022312079/default/test-1539022262249 done.
2018-10-08 18:12:15,686 DEBUG [Time-limited test] impl.TableBackupClient(215): hdfs://localhost:41712/backupUT/backup_1539022312079/default is empty, remove it.
2018-10-08 18:12:15,690 DEBUG [Time-limited test] impl.BackupSystemTable(610): Finish backup exclusive operation
2018-10-08 18:12:15,701 ERROR [Time-limited test] impl.TableBackupClient(254): Backup backup_1539022312079 failed.
2018-10-08 18:12:15,842 INFO [Time-limited test] hbase.ResourceChecker(172): after: backup.TestIncrementalBackupWithBulkLoad#TestIncBackupDeleteTable Thread=832 (was 8)
Potentially hanging thread: New I/O worker #47
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 43 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=13,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 12 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 6 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_608368834_22 at /127.0.0.1:39062 [Receiving block BP-827454334-172.18.128.12-1539022232083:blk_1073741879_1055]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:891)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #97
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RS-EventLoopGroup-3-36
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 10 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 3 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=8,queue=2,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 4 on 42555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 2 on 42555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=1,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: Timer-6 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) java.util.TimerThread.mainLoop(Timer.java:526) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 45980 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 9 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: Time-limited test-SendThread(localhost:54078) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Potentially hanging thread: New I/O worker #12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=17,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 19 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 18 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 5 on 41237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 6 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 6 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) 
Potentially hanging thread: DiskHealthMonitor-Timer java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 33 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp521274628-1081-acceptor-1@70b4876f-ServerConnector@4c2e90f3{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:42937} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-27 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: pool-11-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Timer-2 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (2114608489) connection to localhost/127.0.0.1:41712 from hbase java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1018) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1062) Potentially hanging thread: refreshUsed-/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data2/current/BP-827454334-172.18.128.12-1539022232083 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 29 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@582583ca java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 40 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 8 on 41239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 43 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp828160121-1016 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563) org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #95 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #62 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: DatanodeAdminMonitor-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: master/cn012:0:becomeActiveMaster-HFileCleaner.small.0-1539022242885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:234) Potentially hanging thread: RpcClient-timer-pool1-t1 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 31 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #65 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 8 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: New I/O worker #86 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: 
qtp1896537480-377-acceptor-1@70c39cc8-ServerConnector@12fde802{HTTP/1.1,[http/1.1]}{0.0.0.0:43555}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 30 on 42158
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RS-EventLoopGroup-3-18
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: (cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool3-thread-1
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 4 on 39596
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: Ping Checker
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.yarn.util.AbstractLivelinessMonitor$PingChecker.run(AbstractLivelinessMonitor.java:154) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=18,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 17 on 41239
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: New I/O worker #52
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_608368834_22 at /127.0.0.1:39058 [Receiving block BP-827454334-172.18.128.12-1539022232083:blk_1073741878_1054]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:891) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: NM Event dispatcher
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: Ping Checker
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.yarn.util.AbstractLivelinessMonitor$PingChecker.run(AbstractLivelinessMonitor.java:154) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: region-location-0
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #34
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@47f6134bTimer
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #60
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 11 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 0 on 45980
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 4 on 41239
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 37 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RSProcedureDispatcher-pool3-t5
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 2 on 39009
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RSProcedureDispatcher-pool3-t10
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O server boss #49
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.jboss.netty.channel.socket.nio.NioServerBoss.select(NioServerBoss.java:163) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: PEWorker-4
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)
Potentially hanging thread: IPC Server handler 29 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@40edf244
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:456) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 23 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 5 on 41712
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 18 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 15 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=11,queue=2,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 0 on 33055
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@71ed6bb7Timer
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 16 on 39009
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 44 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 45 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=16,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=22,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server Responder
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330) org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)
Potentially hanging thread: refreshUsed-/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data1/current/BP-827454334-172.18.128.12-1539022232083
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 5 on 39009
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=5,queue=2,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 4 on 42158
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 42 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 22 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=4,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@5fdf3c72Timer
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 43 on 42158
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: snapshot-hfile-cleaner-cache-refresher
    java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: qtp1074331259-740
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #70
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 10 on 41239
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: New I/O worker #42
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=16,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 32 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: New I/O worker #43
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: PEWorker-13
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)
Potentially hanging thread: New I/O worker #18
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x42b80f47
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1784155225.run(Unknown Source) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: Timer-5
    java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) java.util.TimerThread.mainLoop(Timer.java:526) java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: New I/O worker #58
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Async disk worker #0 for volume /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 44 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 33 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 12 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 7 on 41237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 13 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #39 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=15,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: New I/O worker #48 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 49 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 38 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=5,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RS-EventLoopGroup-3-20 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=14,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: ResourceLocalizationService Cache Cleanup sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 46 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #56 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp521274628-1080-acceptor-0@1aa1b4b1-ServerConnector@4c2e90f3{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:42937} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 12 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp18979103-771-acceptor-1@4241de4c-ServerConnector@394100dd{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:33104} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp5144056-388-acceptor-0@2b20e0c6-ServerConnector@4143618a{HTTP/1.1,[http/1.1]}{0.0.0.0:43964} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Ping Checker java.lang.Thread.sleep(Native Method) org.apache.hadoop.yarn.util.AbstractLivelinessMonitor$PingChecker.run(AbstractLivelinessMonitor.java:154) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 8 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 45 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=6,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 39 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=10,queue=0,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: qtp1036918561-40-acceptor-0@60a850d-ServerConnector@b02cad7{HTTP/1.1,[http/1.1]}{localhost:40592} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-26 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server Responder sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330) org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=5,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 20 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 8 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x20c1e999-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

Potentially hanging thread: New I/O worker #88
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@70dde1cb
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3963)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/cn012:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 11 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Timer-4
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: RS-EventLoopGroup-3-34
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 14 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 13 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Socket Reader #1 for port 42158
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
    org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: IPC Server handler 0 on 35621
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 5 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@511019e2Timer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 9 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp1249843020-90
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 6 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x61915206
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1784155225.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 9 on 39596
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Client (2114608489) connection to localhost/127.0.0.1:41712 from hbase.hfs.0
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1018)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1062)

Potentially hanging thread: FSEditLogAsync
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
    org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.dequeueEdit(FSEditLogAsync.java:166)
    org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:174)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: PEWorker-12
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: IPC Server handler 24 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #90
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 16 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp1036918561-42-acceptor-2@3fcb9e48-ServerConnector@b02cad7{HTTP/1.1,[http/1.1]}{localhost:40592}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
    org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp521274628-1083
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 9 on 41237
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: NM ContainerManager dispatcher
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Node Removal Timer
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: qtp1036918561-38
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Socket Reader #1 for port 45292
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
    org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: New I/O worker #14
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: AsyncFSWAL-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: PEWorker-16
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=8,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 8 on 41237
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp1896537480-375
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #2
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x61915206-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

Potentially hanging thread: IPC Server handler 1 on 41712
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS-EventLoopGroup-1-5
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server listener on 41712
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: New I/O worker #57
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #45
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 7 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS_CLOSE_REGION-regionserver/cn012:0-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: HBase-Metrics2-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 18 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 19 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: pool-5-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #89
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-1-7
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 30 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS-EventLoopGroup-1-6
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: pool-91-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 23 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@78d92c0Timer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@22992d8fTimer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: CacheReplicationMonitor(70409772)
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
    org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181)
Potentially hanging thread: PEWorker-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998) Potentially hanging thread: IPC Server handler 7 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RS_OPEN_REGION-regionserver/cn012:0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp930440317-305 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=14,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@3e0538e4[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 39 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=19,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 38 on 36298 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 41 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: Node Status Updater java.lang.Object.wait(Native Method) org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$StatusUpdaterRunnable.run(NodeStatusUpdaterImpl.java:1196) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #51 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Time-limited test-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@52e748deTimer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server idle connection scanner for port 42158 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp930440317-306 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server idle connection scanner for port 35621 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 9 on 41239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=20,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: VolumeScannerThread(/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:626) Potentially hanging thread: New I/O worker #32 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp521274628-1082-acceptor-2@5df66ba-ServerConnector@4c2e90f3{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:42937} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool8-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #17 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #94 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #20 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: LeaseRenewer:hbase@localhost:41712 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server idle connection scanner for port 39009 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-3-11 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Thread-135 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:681) Potentially hanging thread: New I/O worker #29 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=10,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 32 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 47 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=12,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RS-EventLoopGroup-3-31
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-1-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 0 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: NodeLabelManager dispatcher
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@416ec282Timer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1036918561-36
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: M:0;cn012:42545
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:67)
    org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:688)
    org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:885)
    org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:833)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:932)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:595)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=20,queue=2,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: New I/O worker #78
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server listener on 39009
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: IPC Server idle connection scanner for port 42555
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: New I/O worker #91
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-32
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Default-IPC-NioEventLoopGroup-5-2
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 25 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Log Scanner/Cleaner #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=9,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 6 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #26
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: master/cn012:0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:1667)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1687)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$400(AssignmentManager.java:103)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1629)

Potentially hanging thread: IPC Server handler 14 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 28 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 17 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #4
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-4
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 10 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 46 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS-EventLoopGroup-3-41
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=17,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: PEWorker-15
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=8,queue=2,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 3 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS-EventLoopGroup-1-8
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-38
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: threadDeathWatcher-4-1
    java.lang.Thread.sleep(Native Method)
    org.apache.hbase.thirdparty.io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: rs(cn012.l42scl.hortonworks.com,37486,1539022239614)-backup-pool11-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-5
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@1ee92bf8
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:246)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-15
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O server boss #98
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.jboss.netty.channel.socket.nio.NioServerBoss.select(NioServerBoss.java:163)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Socket Reader #1 for port 35621
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
    org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: RSProcedureDispatcher-pool3-t7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 14 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 26 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Monitor thread for TaskMonitor
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: StorageInfoMonitor
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4549)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-6
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Timer-1
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: PEWorker-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: RS-EventLoopGroup-3-42
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Log Scanner/Cleaner #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@303cd343
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Time-limited test.LruBlockCache.EvictionThread
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:888)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: master/cn012:0:becomeActiveMaster-SendThread(localhost:54078)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

Potentially hanging thread: IPC Server handler 34 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server Responder
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
    org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RedundancyMonitor
    java.lang.Thread.sleep(Native Method)
    java.lang.Thread.sleep(Thread.java:340)
    java.util.concurrent.TimeUnit.sleep(TimeUnit.java:386)
    org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4514)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RSProcedureDispatcher-pool3-t6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server listener on 41237
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: IPC Server listener on 42158
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: New I/O worker #72
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 8 on 39596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 8 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp5144056-384 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp5144056-391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563) org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp930440317-308 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 2 on 45980 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 0 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: pool-92-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: New I/O worker #37 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #83 sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #24 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: master/cn012:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-46 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Thread[Thread-263,5,FailOnTimeoutGroup] java.lang.Thread.sleep(Native Method) org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:694) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: OldWALsCleaner-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181) org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159) org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$143/809404866.run(Unknown Source) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #30 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 37 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: PEWorker-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998) Potentially hanging thread: RS-EventLoopGroup-3-17 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x61915206-SendThread(localhost:54078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Potentially hanging thread: IPC Server handler 15 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 5 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=7,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 1 on 39596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=14,queue=2,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: pool-95-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=19,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: New I/O worker #76 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 0 on 41237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp1074331259-742-acceptor-0@6551702-ServerConnector@4d733c6a{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:36633}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp930440317-309-acceptor-0@67876766-ServerConnector@abac572{HTTP/1.1,[http/1.1]}{0.0.0.0:36122}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Async disk worker #0 for volume /mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data2
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp18979103-772-acceptor-2@56993618-ServerConnector@394100dd{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:33104}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: BP-827454334-172.18.128.12-1539022232083 heartbeating to localhost/127.0.0.1:41712
  java.lang.Object.wait(Native Method)
  org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158)
  org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:714)
  org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:841)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: NM Event dispatcher
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1036918561-43
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
  org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
  org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #46
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 48 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS_CLOSE_META-regionserver/cn012:0-0
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 41 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 20 on 36298
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp930440317-310-acceptor-1@7aed2813-ServerConnector@abac572{HTTP/1.1,[http/1.1]}{0.0.0.0:36122}
  sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Socket Reader #1 for port 39596
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
  org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: IPC Server handler 9 on 45292
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server listener on 45980
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@69ea39eeTimer
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Thread-311
  java.lang.Thread.sleep(Native Method)
  org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager$1.run(ActivitiesManager.java:142)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1074331259-738
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Public Localizer
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  java.util.concurrent.ExecutorCompletionService.take(ExecutorCompletionService.java:193)
  org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.run(ResourceLocalizationService.java:961)

Potentially hanging thread: IPC Server handler 1 on 36298
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #50
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: nioEventLoopGroup-2-1
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
  io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:754)
  io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:410)
  io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
  io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-28
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 47 on 36298
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp1074331259-744-acceptor-2@5ceb9cbb-ServerConnector@4d733c6a{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:36633}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: SyncThread:0
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)

Potentially hanging thread: IPC Server Responder
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
  org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=23,queue=2,port=37486
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server idle connection scanner for port 41712
  java.lang.Object.wait(Native Method)
  java.util.TimerThread.mainLoop(Timer.java:552)
  java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: Timer-3
  java.lang.Object.wait(Native Method)
  java.util.TimerThread.mainLoop(Timer.java:552)
  java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=21,queue=0,port=37486
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@323d2b51
  java.lang.Thread.sleep(Native Method)
  org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 3 on 41237
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp5144056-386
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 3 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: OldWALsCleaner-1
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:181)
  org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$0(LogCleaner.java:159)
  org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$143/809404866.run(Unknown Source)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 46 on 36298
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #16
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RM Event dispatcher
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=1,queue=1,port=37486
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: SplitLogWorker-cn012:37486
  java.lang.Object.wait(Native Method)
  org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination.taskLoop(ZkSplitLogWorkerCoordination.java:461)
  org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:219)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server listener on 41239
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: IPC Server handler 3 on 41239
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 6 on 39596
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 40 on 45292
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37486
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: Ping Checker
  java.lang.Thread.sleep(Native Method)
  org.apache.hadoop.yarn.util.AbstractLivelinessMonitor$PingChecker.run(AbstractLivelinessMonitor.java:154)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #53
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-14
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #71
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #92
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: pool-3-thread-1
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-43
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-35
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1750455784_22 at /127.0.0.1:38696 [Waiting for operation #89]
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
  org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
  org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
  org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
  java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
  java.io.BufferedInputStream.read(BufferedInputStream.java:265)
  java.io.DataInputStream.readShort(DataInputStream.java:312)
  org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:71)
  org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 2 on 39596
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp828160121-1015-acceptor-1@5f63693d-ServerConnector@54512ed9{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:43378}
  sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1074331259-739
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-1-1
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp828160121-1017-acceptor-2@3f785c55-ServerConnector@54512ed9{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:43378}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=42545
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 4 on 39009
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 0 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp828160121-1011 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp930440317-307 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: PEWorker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998) Potentially hanging thread: IPC Server handler 2 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: WALProcedureStoreSyncThread 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:809) org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:106) org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:308) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool2-t1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp1249843020-88 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: (cn012.l42scl.hortonworks.com,42545,1539022237747)-proc-coordinator-pool2-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 2 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 31 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 15 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server idle connection scanner for port 39596 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 8 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: ProcessThread(sid:0 cport:54078): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122) Potentially hanging thread: master/cn012:0:becomeActiveMaster-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=17,queue=2,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 0 on 39596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=16,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@40dbac71Timer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: 
org.eclipse.jetty.server.session.HashSessionManager@75991ab0Timer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 35 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@3e016894 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: MASTER_TABLE_OPERATIONS-master/cn012:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RM StateStore dispatcher sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 7 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-24 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #77 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: regionserver/cn012:0.logRoller java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:167) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool5-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server idle connection scanner for port 36298 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp828160121-1013 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 3 on 39596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 1 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=3,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RS-EventLoopGroup-3-9 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 34 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 17 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server 
handler 5 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x20c1e999-SendThread(localhost:54078)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
Potentially hanging thread: RS-EventLoopGroup-3-19
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #96
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #93
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 2 on 41239
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: New I/O worker #19
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 22 on 42158
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: New I/O worker #25
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: Timer-0
    java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: IPC Server handler 48 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: qtp828160121-1012
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: Thread[Thread-304,5,FailOnTimeoutGroup]
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:694) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 25 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=27,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Parameter Sending Thread #1
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: JvmPauseMonitor
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:154) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: qtp18979103-767
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: qtp521274628-1079
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #44
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: PacketResponder: BP-827454334-172.18.128.12-1539022232083:blk_1073741829_1005, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1330) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1402) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=17,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@2419e055Timer
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server listener on 45292
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=27,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: master/cn012:0:becomeActiveMaster-MemStoreChunkPool Statistics
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=16,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=6,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: New I/O worker #27
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RS-EventLoopGroup-3-23
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: qtp1896537480-376-acceptor-0@1d8f20e0-ServerConnector@12fde802{HTTP/1.1,[http/1.1]}{0.0.0.0:43555}
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@8471a00Timer
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: ResponseProcessor for block BP-827454334-172.18.128.12-1539022232083:blk_1073741829_1005
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:547) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
Potentially hanging thread: IPC Server handler 7 on 36298
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: New I/O worker #63
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: pool-99-thread-1
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@711a51a7
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: New I/O worker #15
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server listener on 35621
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)
Potentially hanging thread: qtp1074331259-743-acceptor-1@4d17959f-ServerConnector@4d733c6a{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:36633}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=3,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: qtp1074331259-745
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563) org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@2aa9224
    java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:4096) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: MemStoreFlusher.0
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) java.util.concurrent.DelayQueue.poll(DelayQueue.java:70) org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: IPC Server handler 2 on 41237
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 7 on 42158
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=12,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: qtp5144056-390-acceptor-2@60933791-ServerConnector@4143618a{HTTP/1.1,[http/1.1]}{0.0.0.0:43964}
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: Socket Reader #1 for port 41239
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)
Potentially hanging thread: IPC Server handler 35 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: New I/O worker #59
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748)
Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=9,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Potentially hanging thread: IPC Server handler 6 on 45292
    sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)
Potentially hanging thread: IPC Server handler 0 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #87 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-13 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #55 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Timer for 'JobTracker' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Client (2114608489) connection to localhost/127.0.0.1:41712 from hbase java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1018) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1062) Potentially hanging thread: IPC Server handler 5 on 41239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 38 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #79 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: PacketResponder: BP-827454334-172.18.128.12-1539022232083:blk_1073741878_1054, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1330) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1402) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS:0;cn012:37486 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:67) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1016) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:184) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:130) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:168) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:341) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:165) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #40 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS_OPEN_PRIORITY_REGION-regionserver/cn012:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 15 on 41239 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@682758abTimer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 9 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 45 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 11 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RS-EventLoopGroup-3-33 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: SchedulerEventDispatcher:Event Processor sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492) java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680) org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:59) java.lang.Thread.run(Thread.java:748) Potentially hanging 
thread: New I/O worker #85 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Public Localizer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ExecutorCompletionService.take(ExecutorCompletionService.java:193) org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.run(ResourceLocalizationService.java:961) Potentially hanging thread: IPC Server handler 20 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp828160121-1014-acceptor-0@382bb83c-ServerConnector@54512ed9{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:43378} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Socket Reader #1 for port 39009 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072) Potentially hanging thread: IPC Server handler 14 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: ReplicationExecutor-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 44 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: NM ContainerManager dispatcher sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:118) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 17 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=4,queue=1,port=37486 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: MobFileCache #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=14,queue=0,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 36 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 4 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 10 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 1 on 39009 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: PEWorker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=7,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: PEWorker-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: IPC Server handler 48 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: master/cn012:0.Chore.1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=21,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: qtp18979103-768
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-12
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RegionServerTracker-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 13 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: DiskHealthMonitor-Timer
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: AsyncFSWAL-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #67
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS:0;cn012:37486-longCompactions-1539022246113
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 24 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Close-WAL-Writer-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 11 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: PEWorker-9
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: qtp5144056-387
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: LeaseRenewer:hbase@localhost:41712
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-37
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 36 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
    org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3806)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=15,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 16 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=13,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 31 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 34 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Socket Reader #1 for port 36298
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
    org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: RS-EventLoopGroup-3-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-7
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 4 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #35
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: regionserver/cn012:0.leaseChecker
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:95)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-1-9
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 16 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 10 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS-EventLoopGroup-3-10
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #28
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=8,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 29 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RSProcedureDispatcher-pool3-t8
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x20c1e999
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1784155225.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 7 on 39596
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #81
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x21347a00
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1784155225.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 49 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: ProcedureDispatcherTimeoutThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.DelayQueue.take(DelayQueue.java:211)
    org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:78)
    org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher$TimeoutExecutorThread.run(RemoteProcedureDispatcher.java:294)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 17 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 21 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Time-limited test-SendThread(localhost:54078)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)

Potentially hanging thread: qtp1249843020-89-acceptor-0@311e46ff-ServerConnector@6cbd884e{HTTP/1.1,[http/1.1]}{localhost:40342}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
    org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1896537480-379
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: pool-10-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #13
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 1 on 42555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #21
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server listener on 42555
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: PacketResponder: BP-827454334-172.18.128.12-1539022232083:blk_1073741879_1055, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1330)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1402)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=18,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@59128432
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server listener on 36298
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: IPC Server handler 27 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: LeaseRenewer:hbase.hfs.0@localhost:41712
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-45 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 28 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@cd8626cTimer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 1 on 45980 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=7,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=18,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: qtp5144056-385 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: pool-93-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 15 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 40 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #23 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: hconnection-0x5b7c6101-shared-pool6-t21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Default-IPC-NioEventLoopGroup-5-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 19 on 41239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: qtp1036918561-39 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #74 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@a83321Timer sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 7 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=15,queue=0,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server idle connection scanner for port 41237 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 9 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: regionserver/cn012:0.Chore.1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RS-EventLoopGroup-3-29 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 12 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #64 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 0 on 41712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server Responder sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330) org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313) Potentially hanging thread: IPC Server handler 36 on 45292 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server Responder sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330) org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313) Potentially hanging thread: qtp18979103-769 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: PEWorker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159) org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141) org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=22,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=11,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x41064d23
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
  org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
  org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1784155225.run(Unknown Source)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-21
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-3
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 4 on 45292
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 4 on 41237
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 30 on 45292
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 2 on 36298
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: New I/O worker #36
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #84
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-39
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server Responder
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
  org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=11,queue=1,port=42545
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: qtp1036918561-41-acceptor-1@7b6469e4-ServerConnector@b02cad7{HTTP/1.1,[http/1.1]}{localhost:40592}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-16
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #66
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 26 on 45292
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 11 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: PEWorker-2
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
  org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
  org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: pool-8-thread-1
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42545
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 27 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp930440317-311-acceptor-2@4dd1bc5b-ServerConnector@abac572{HTTP/1.1,[http/1.1]}{0.0.0.0:36122}
  sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
  org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
  org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: SessionTracker
  java.lang.Object.wait(Native Method)
  org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)

Potentially hanging thread: IPC Server handler 19 on 45292
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 1 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 1 on 41237
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 42 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: PEWorker-1
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
  org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
  org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: IPC Server handler 47 on 42158
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp1896537480-373
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #38
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-25
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1036918561-37
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #41
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ResourceLocalizationService Cache Cleanup
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
  java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp828160121-1010
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS-EventLoopGroup-3-44
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=10,queue=0,port=42545
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RS-EventLoopGroup-1-4
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
  org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
  org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #8
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: master/cn012:0:becomeActiveMaster-HFileCleaner.large.0-1539022242885
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106)
  org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:250)
  org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:219)

Potentially hanging thread: IPC Server Responder
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
  org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: Ping Checker
  java.lang.Thread.sleep(Native Method)
  org.apache.hadoop.yarn.util.AbstractLivelinessMonitor$PingChecker.run(AbstractLivelinessMonitor.java:154)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1896537480-372
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
  org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
  org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RS_OPEN_REGION-regionserver/cn012:0-2
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp930440317-312
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
  org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
  org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
  org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: New I/O worker #31
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 3 on 45980
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Block report processor
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
  org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4873)
  org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4862)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=18,queue=0,port=37486
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: New I/O worker #75
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=37486
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=42545
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
  org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
  org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 49 on 36298
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
  java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
  org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Socket Reader #1 for port 42555
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
  org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
  org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: IPC Server Responder
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
  org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: New I/O worker #69
  sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
  sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
  sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
  sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
  sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
  org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
  org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
  org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 5 on 39596
  sun.misc.Unsafe.park(Native Method)
  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: cn012:37486Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 18 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 13 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging 
thread: IPC Server handler 42 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #73 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=7,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: LruBlockCacheStatsExecutor sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 6 on 41237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 22 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: IPC Server handler 32 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: Close-WAL-Writer-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp5144056-389-acceptor-1@7fc5b45c-ServerConnector@4143618a{HTTP/1.1,[http/1.1]}{0.0.0.0:43964} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Socket Reader #1 for port 41712 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=5,queue=1,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1184345299_22 at /127.0.0.1:38268 [Receiving block BP-827454334-172.18.128.12-1539022232083:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:891) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: master/cn012:0.splitLogManager..Chore.1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: Socket Reader #1 for port 45980 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072) Potentially hanging thread: IPC Server listener on 39596 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@801b6b3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4005) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 25 on 45292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 0 on 42555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #22 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server Responder sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330) org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313) Potentially hanging thread: RS-EventLoopGroup-3-30 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp1896537480-378-acceptor-2@1df01852-ServerConnector@12fde802{HTTP/1.1,[http/1.1]}{0.0.0.0:43555} sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234) org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371) org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: DataStreamer for file /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/MasterProcWALs/pv2-00000000000000000001.log block BP-827454334-172.18.128.12-1539022232083:blk_1073741829_1005 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:681) Potentially hanging thread: RS-EventLoopGroup-3-22 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 14 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@293913fd sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp521274628-1077 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: qtp1074331259-741 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243) org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100) org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147) org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 12 on 39009 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 24 on 42158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: Socket Reader #1 for port 41237 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
    org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093)
    org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@47cd14b1Timer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: PEWorker-11
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:159)
    org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:141)
    org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1998)

Potentially hanging thread: IPC Server idle connection scanner for port 45292
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: New I/O worker #33
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ReadOnlyZKClient-localhost:54078@0x7b9a705f
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:313)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$91/1784155225.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 4 on 45980
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@3cb526d9Timer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=13,queue=1,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 41 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: regionserver/cn012:0.procedureResultReporter
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:75)

Potentially hanging thread: Thread[Thread-296,5,FailOnTimeoutGroup]
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:694)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: Time-limited test-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

Potentially hanging thread: qtp521274628-1078
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ApplicationMaster Launcher
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread.run(ApplicationMasterLauncher.java:119)

Potentially hanging thread: IPC Server listener on 33055
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hadoop.ipc.Server$Listener.run(Server.java:1155)

Potentially hanging thread: IPC Server handler 3 on 42555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@1c13a315Timer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=10,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: New I/O worker #80
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.eclipse.jetty.server.session.HashSessionManager@3a2445bbTimer
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=23,queue=2,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: New I/O worker #54
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=6,queue=0,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: New I/O worker #10
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 27 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 3 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: pool-98-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 21 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: NIOServerCxn.Factory:0.0.0.0/0.0.0.0:54078
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 37 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 18 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp18979103-773
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp1896537480-374
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 35 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 26 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: member: 'cn012.l42scl.hortonworks.com,37486,1539022239614' subprocedure-pool6-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server idle connection scanner for port 33055
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: New I/O worker #68
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
    org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
    org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp521274628-1076
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 28 on 45292
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RS-EventLoopGroup-3-40
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: ProcExecTimeout
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.take(DelayQueue.java:223)
    org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:78)
    org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:56)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=6,queue=0,port=42545
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially hanging thread: IPC Server handler 3 on 41712
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server idle connection scanner for port 41239
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: IPC Server handler 39 on 42158
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: qtp18979103-770-acceptor-0@7f2e8819-ServerConnector@394100dd{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:33104}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:371)
    org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 1 on 41239
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server Responder
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
    org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@f55b03e
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4f9c68d1
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: qtp18979103-766
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
    org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
    org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)
    org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server Responder
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1330)
    org.apache.hadoop.ipc.Server$Responder.run(Server.java:1313)

Potentially hanging thread: IPC Server handler 0 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: Node Status Updater
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$StatusUpdaterRunnable.run(NodeStatusUpdaterImpl.java:1196)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: pool-96-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)

Potentially hanging thread: IPC Server handler 13 on 39009
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: IPC Server handler 16 on 36298
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664)

Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=15,queue=1,port=37486
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Potentially
hanging thread: IPC Server handler 19 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: MemStoreFlusher.1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) java.util.concurrent.DelayQueue.poll(DelayQueue.java:70) org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:336) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: New I/O worker #82 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 33 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RSProcedureDispatcher-pool3-t9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: IPC Server handler 23 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: RS-EventLoopGroup-3-8 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:122) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:235) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:252) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=42545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: VolumeScannerThread(/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:626) Potentially hanging thread: Default-IPC-NioEventLoopGroup-5-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) java.lang.Thread.run(Thread.java:748) Potentially hanging thread: RpcServer.priority.FPBQ.Fifo.handler=11,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: Socket Reader #1 for port 33055 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1093) org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1072) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=13,queue=1,port=37486 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:104) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) Potentially hanging thread: IPC Server handler 21 on 36298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2664) Potentially hanging thread: New I/O worker #61 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68) org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434) org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212) org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) - Thread LEAK? -, OpenFileDescriptor=1557 (was 205) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=32000 (was 32000), SystemLoadAverage=408 (was 192) - SystemLoadAverage LEAK? -, ProcessCount=370 (was 366) - ProcessCount LEAK? -, AvailableMemoryMB=37345 (was 43235) 2018-10-08 18:12:15,846 WARN [Time-limited test] hbase.ResourceChecker(135): Thread=832 is superior to 500 2018-10-08 18:12:15,846 WARN [Time-limited test] hbase.ResourceChecker(135): OpenFileDescriptor=1557 is superior to 1024 2018-10-08 18:12:15,857 INFO [Time-limited test] hbase.HBaseTestingUtility(1228): Shutting down minicluster 2018-10-08 18:12:15,857 INFO [Time-limited test] client.ConnectionImplementation(1801): Closing master protocol: MasterService 2018-10-08 18:12:15,858 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x41064d23 to localhost:54078 2018-10-08 18:12:15,858 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:12:15,858 DEBUG [Time-limited test] util.JVMClusterUtil(238): Shutting down HBase Cluster 2018-10-08 18:12:15,861 INFO [Time-limited test] master.ServerManager(916): Cluster shutdown requested of master=cn012.l42scl.hortonworks.com,42545,1539022237747 2018-10-08 18:12:15,866 INFO [Time-limited test] procedure2.ProcedureExecutor(705): Stopping 2018-10-08 18:12:15,866 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/1/running 2018-10-08 18:12:15,866 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running 2018-10-08 18:12:15,868 INFO [Time-limited test] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x42b80f47 to localhost:54078 2018-10-08 18:12:15,868 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2018-10-08 18:12:15,869 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2018-10-08 18:12:15,868 DEBUG [Time-limited test] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:12:15,869 INFO [Time-limited test] regionserver.HRegionServer(2160): ***** STOPPING region server 'cn012.l42scl.hortonworks.com,37486,1539022239614' ***** 2018-10-08 18:12:15,869 INFO [Time-limited test] regionserver.HRegionServer(2174): STOPPED: Shutdown requested 2018-10-08 18:12:15,869 INFO [RS:0;cn012:37486] regionserver.SplitLogWorker(241): Sending interrupt to stop the worker thread 2018-10-08 18:12:15,870 INFO [RS:0;cn012:37486] regionserver.HRegionServer(1032): Stopping infoServer 2018-10-08 18:12:15,870 INFO [SplitLogWorker-cn012:37486] regionserver.SplitLogWorker(223): SplitLogWorker interrupted. Exiting. 
2018-10-08 18:12:15,871 INFO [SplitLogWorker-cn012:37486] regionserver.SplitLogWorker(232): SplitLogWorker cn012.l42scl.hortonworks.com,37486,1539022239614 exiting 2018-10-08 18:12:15,884 INFO [RS:0;cn012:37486] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@29e0936b{/,null,UNAVAILABLE}{jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/regionserver} 2018-10-08 18:12:15,890 INFO [RS:0;cn012:37486] server.AbstractConnector(318): Stopped ServerConnector@12fde802{HTTP/1.1,[http/1.1]}{0.0.0.0:0} 2018-10-08 18:12:15,891 INFO [RS:0;cn012:37486] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@5b4194e7{/static,jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/static,UNAVAILABLE} 2018-10-08 18:12:15,892 INFO [RS:0;cn012:37486] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@167c9561{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE} 2018-10-08 18:12:15,894 INFO [RS:0;cn012:37486] regionserver.HeapMemoryManager(221): Stopping 2018-10-08 18:12:15,894 INFO [RS:0;cn012:37486] flush.RegionServerFlushTableProcedureManager(116): Stopping region server flush procedure manager gracefully. 2018-10-08 18:12:15,895 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.1 exiting 2018-10-08 18:12:15,897 INFO [RS:0;cn012:37486] regionserver.LogRollRegionServerProcedureManager(108): Stopping RegionServerBackupManager gracefully. 2018-10-08 18:12:15,895 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(383): MemStoreFlusher.0 exiting 2018-10-08 18:12:15,900 INFO [RS:0;cn012:37486] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 
2018-10-08 18:12:15,904 INFO [RS:0;cn012:37486] regionserver.HRegionServer(1070): stopping server cn012.l42scl.hortonworks.com,37486,1539022239614 2018-10-08 18:12:15,904 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1554): Closing 597b3222c11323d82584b9711fb2a2c8, disabling compactions & flushes 2018-10-08 18:12:15,908 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1554): Closing 94bac9ca44593231733270505a40a07a, disabling compactions & flushes 2018-10-08 18:12:15,908 DEBUG [RS:0;cn012:37486] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator 2018-10-08 18:12:15,906 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(1554): Closing cea8b370d2c8987401a9e1fa10290c45, disabling compactions & flushes 2018-10-08 18:12:15,908 INFO [RS:0;cn012:37486] client.ConnectionImplementation(1801): Closing master protocol: MasterService 2018-10-08 18:12:15,908 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1594): Updates disabled for region backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a. 2018-10-08 18:12:15,908 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1594): Updates disabled for region ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8. 2018-10-08 18:12:15,909 INFO [RS:0;cn012:37486] zookeeper.ReadOnlyZKClient(350): Close zookeeper connection 0x20c1e999 to localhost:54078 2018-10-08 18:12:15,909 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(1594): Updates disabled for region ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45. 
2018-10-08 18:12:15,909 DEBUG [RS:0;cn012:37486] ipc.AbstractRpcClient(483): Stopping rpc client 2018-10-08 18:12:15,909 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(2647): Flushing 2/2 column families, dataSize=2.82 KB heapSize=4.50 KB 2018-10-08 18:12:15,910 INFO [RS:0;cn012:37486] regionserver.CompactSplit(431): Waiting for Split Thread to finish... 2018-10-08 18:12:15,910 INFO [RS:0;cn012:37486] regionserver.CompactSplit(431): Waiting for Large Compaction Thread to finish... 2018-10-08 18:12:15,910 INFO [RS:0;cn012:37486] regionserver.CompactSplit(431): Waiting for Small Compaction Thread to finish... 2018-10-08 18:12:15,919 INFO [RS:0;cn012:37486] regionserver.HRegionServer(1400): Waiting on 8 regions to close 2018-10-08 18:12:15,919 DEBUG [RS:0;cn012:37486] regionserver.HRegionServer(1404): Online Regions={597b3222c11323d82584b9711fb2a2c8=ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8., cea8b370d2c8987401a9e1fa10290c45=ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45., 94bac9ca44593231733270505a40a07a=backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a., 59e0b46d9fd65e74c2c583b12693382d=hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d., be1bf5445faddb63e45726410a07917a=test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a., 1588230740=hbase:meta,,1.1588230740, 29493d1f83444b313854401df15f30aa=backup:system,,1539022287674.29493d1f83444b313854401df15f30aa., a5b65c0ba00fd6a2f67397f742450e8c=ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.} 2018-10-08 18:12:15,920 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(1554): Closing 1588230740, disabling compactions & flushes 2018-10-08 18:12:15,920 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(1594): Updates disabled for region hbase:meta,,1.1588230740 2018-10-08 18:12:15,920 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(2647): 
Flushing 3/3 column families, dataSize=11.29 KB heapSize=19.70 KB 2018-10-08 18:12:15,929 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns3/test-15390222622492/cea8b370d2c8987401a9e1fa10290c45/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-10-08 18:12:15,931 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns4/test-15390222622493/597b3222c11323d82584b9711fb2a2c8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2018-10-08 18:12:15,931 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver 2018-10-08 18:12:15,933 INFO [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(1711): Closed ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45. 2018-10-08 18:12:15,933 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver 2018-10-08 18:12:15,933 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] handler.CloseRegionHandler(129): Closed ns3:test-15390222622492,,1539022277024.cea8b370d2c8987401a9e1fa10290c45. 2018-10-08 18:12:15,936 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(1554): Closing 59e0b46d9fd65e74c2c583b12693382d, disabling compactions & flushes 2018-10-08 18:12:15,936 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(1594): Updates disabled for region hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d. 
2018-10-08 18:12:15,937 INFO [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(2647): Flushing 1/1 column families, dataSize=249 B heapSize=1.02 KB 2018-10-08 18:12:15,937 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1711): Closed ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8. 2018-10-08 18:12:15,938 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] handler.CloseRegionHandler(129): Closed ns4:test-15390222622493,,1539022281522.597b3222c11323d82584b9711fb2a2c8. 2018-10-08 18:12:15,939 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1554): Closing be1bf5445faddb63e45726410a07917a, disabling compactions & flushes 2018-10-08 18:12:15,939 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1594): Updates disabled for region test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. 2018-10-08 18:12:15,967 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/recovered.edits/209.seqid, newMaxSeqId=209, maxSeqId=1 2018-10-08 18:12:15,968 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver 2018-10-08 18:12:15,970 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1711): Closed test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. 2018-10-08 18:12:15,971 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] handler.CloseRegionHandler(129): Closed test-1539022262249,,1539022267638.be1bf5445faddb63e45726410a07917a. 
2018-10-08 18:12:15,972 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1554): Closing 29493d1f83444b313854401df15f30aa, disabling compactions & flushes 2018-10-08 18:12:15,972 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1594): Updates disabled for region backup:system,,1539022287674.29493d1f83444b313854401df15f30aa. 2018-10-08 18:12:15,972 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(2647): Flushing 2/2 column families, dataSize=692 B heapSize=1.32 KB 2018-10-08 18:12:16,139 INFO [regionserver/cn012:0.Chore.1] hbase.ScheduledChore(180): Chore: MemstoreFlusherChore was stopped 2018-10-08 18:12:16,139 INFO [regionserver/cn012:0.Chore.1] hbase.ScheduledChore(180): Chore: CompactionChecker was stopped 2018-10-08 18:12:16,140 INFO [regionserver/cn012:0.leaseChecker] regionserver.Leases(149): Closed leases 2018-10-08 18:12:16,367 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=2.82 KB at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/.tmp/meta/1aec53ae03fa4c89b01136643f764b91 2018-10-08 18:12:16,368 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=10.42 KB at sequenceid=49 (bloomFilter=false), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/.tmp/info/8a44ef71aa9845fab3a2274426abb62a 2018-10-08 18:12:16,376 INFO [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=249 B at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/.tmp/info/963190b3efa0497bba3b9b8605d20975 2018-10-08 18:12:16,378 INFO 
[RS_CLOSE_META-regionserver/cn012:0-0] regionserver.StoreFileReader(608): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8a44ef71aa9845fab3a2274426abb62a 2018-10-08 18:12:16,379 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/.tmp/meta/1aec53ae03fa4c89b01136643f764b91 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/meta/1aec53ae03fa4c89b01136643f764b91 2018-10-08 18:12:16,384 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=692 B at sequenceid=36 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/62b019ecb71d466cb49be52961f6753d 2018-10-08 18:12:16,386 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/.tmp/info/963190b3efa0497bba3b9b8605d20975 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/info/963190b3efa0497bba3b9b8605d20975 2018-10-08 18:12:16,394 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system_bulk/94bac9ca44593231733270505a40a07a/meta/1aec53ae03fa4c89b01136643f764b91, entries=8, sequenceid=6, filesize=6.5 K 2018-10-08 18:12:16,396 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=894 B at sequenceid=49 (bloomFilter=false), 
to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/.tmp/table/3a8d9af8ba604dd1a3e351110622a333 2018-10-08 18:12:16,396 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(2856): Finished flush of dataSize ~2.82 KB/2888, heapSize ~4.23 KB/4336, currentSize=0 B/0 for 94bac9ca44593231733270505a40a07a in 487ms, sequenceid=6, compaction requested=false 2018-10-08 18:12:16,396 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.MetricsTableSourceImpl(124): Creating new MetricsTableSourceImpl for table 'backup:system_bulk' 2018-10-08 18:12:16,402 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/.tmp/session/62b019ecb71d466cb49be52961f6753d as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/62b019ecb71d466cb49be52961f6753d 2018-10-08 18:12:16,402 INFO [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/info/963190b3efa0497bba3b9b8605d20975, entries=7, sequenceid=11, filesize=5.0 K 2018-10-08 18:12:16,403 INFO [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(2856): Finished flush of dataSize ~249 B/249, heapSize ~1.01 KB/1032, currentSize=0 B/0 for 59e0b46d9fd65e74c2c583b12693382d in 467ms, sequenceid=11, compaction requested=false 2018-10-08 18:12:16,404 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.MetricsTableSourceImpl(124): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2018-10-08 18:12:16,407 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote 
file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system_bulk/94bac9ca44593231733270505a40a07a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2018-10-08 18:12:16,409 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver 2018-10-08 18:12:16,411 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1711): Closed backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a. 2018-10-08 18:12:16,411 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] handler.CloseRegionHandler(129): Closed backup:system_bulk,,1539022292203.94bac9ca44593231733270505a40a07a. 2018-10-08 18:12:16,413 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1554): Closing a5b65c0ba00fd6a2f67397f742450e8c, disabling compactions & flushes 2018-10-08 18:12:16,413 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1594): Updates disabled for region ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c. 
2018-10-08 18:12:16,413 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/.tmp/info/8a44ef71aa9845fab3a2274426abb62a as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/info/8a44ef71aa9845fab3a2274426abb62a 2018-10-08 18:12:16,413 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(2647): Flushing 1/1 column families, dataSize=3.17 KB heapSize=11 KB 2018-10-08 18:12:16,416 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/namespace/59e0b46d9fd65e74c2c583b12693382d/recovered.edits/14.seqid, newMaxSeqId=14, maxSeqId=1 2018-10-08 18:12:16,417 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver 2018-10-08 18:12:16,419 INFO [RS_CLOSE_REGION-regionserver/cn012:0-2] regionserver.HRegion(1711): Closed hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d. 2018-10-08 18:12:16,419 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-2] handler.CloseRegionHandler(129): Closed hbase:namespace,,1539022248288.59e0b46d9fd65e74c2c583b12693382d. 
2018-10-08 18:12:16,419 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/backup/system/29493d1f83444b313854401df15f30aa/session/62b019ecb71d466cb49be52961f6753d, entries=2, sequenceid=36, filesize=5.5 K
2018-10-08 18:12:16,421 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(2856): Finished flush of dataSize ~692 B/692, heapSize ~1.05 KB/1080, currentSize=0 B/0 for 29493d1f83444b313854401df15f30aa in 449ms, sequenceid=36, compaction requested=true
2018-10-08 18:12:16,430 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.StoreFileReader(608): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8a44ef71aa9845fab3a2274426abb62a
2018-10-08 18:12:16,430 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/info/8a44ef71aa9845fab3a2274426abb62a, entries=71, sequenceid=49, filesize=13.7 K
2018-10-08 18:12:16,431 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/.tmp/table/3a8d9af8ba604dd1a3e351110622a333 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/table/3a8d9af8ba604dd1a3e351110622a333
2018-10-08 18:12:16,433 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/backup/system/29493d1f83444b313854401df15f30aa/recovered.edits/39.seqid, newMaxSeqId=39, maxSeqId=31
2018-10-08 18:12:16,433 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:12:16,434 INFO [RS_CLOSE_REGION-regionserver/cn012:0-1] regionserver.HRegion(1711): Closed backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:16,434 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-1] handler.CloseRegionHandler(129): Closed backup:system,,1539022287674.29493d1f83444b313854401df15f30aa.
2018-10-08 18:12:16,438 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/hbase/meta/1588230740/table/3a8d9af8ba604dd1a3e351110622a333, entries=15, sequenceid=49, filesize=5.3 K
2018-10-08 18:12:16,439 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(2856): Finished flush of dataSize ~11.29 KB/11559, heapSize ~19.42 KB/19888, currentSize=0 B/0 for 1588230740 in 519ms, sequenceid=49, compaction requested=false
2018-10-08 18:12:16,439 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.MetricsTableSourceImpl(124): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2018-10-08 18:12:16,455 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/hbase/meta/1588230740/recovered.edits/52.seqid, newMaxSeqId=52, maxSeqId=1
2018-10-08 18:12:16,456 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:12:16,456 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2018-10-08 18:12:16,458 INFO [RS_CLOSE_META-regionserver/cn012:0-0] regionserver.HRegion(1711): Closed hbase:meta,,1.1588230740
2018-10-08 18:12:16,458 DEBUG [RS_CLOSE_META-regionserver/cn012:0-0] handler.CloseRegionHandler(129): Closed hbase:meta,,1.1588230740
2018-10-08 18:12:16,835 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.DefaultStoreFlusher(85): Flushed memstore data size=3.17 KB at sequenceid=103 (bloomFilter=true), to=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/.tmp/f/e0c89f8f29f244fbb3014e226ad4b219
2018-10-08 18:12:16,852 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegionFileSystem(464): Committing hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/.tmp/f/e0c89f8f29f244fbb3014e226ad4b219 as hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/f/e0c89f8f29f244fbb3014e226ad4b219
2018-10-08 18:12:16,866 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HStore(1071): Added hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/f/e0c89f8f29f244fbb3014e226ad4b219, entries=99, sequenceid=103, filesize=8.1 K
2018-10-08 18:12:16,869 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(2856): Finished flush of dataSize ~3.17 KB/3247, heapSize ~10.98 KB/11248, currentSize=0 B/0 for a5b65c0ba00fd6a2f67397f742450e8c in 456ms, sequenceid=103, compaction requested=false
2018-10-08 18:12:16,869 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.MetricsTableSourceImpl(124): Creating new MetricsTableSourceImpl for table 'ns2:test-15390222622491'
2018-10-08 18:12:16,883 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] wal.WALSplitter(696): Wrote file=hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/ns2/test-15390222622491/a5b65c0ba00fd6a2f67397f742450e8c/recovered.edits/106.seqid, newMaxSeqId=106, maxSeqId=1
2018-10-08 18:12:16,884 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] coprocessor.CoprocessorHost(288): Stop coprocessor org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:12:16,886 INFO [RS_CLOSE_REGION-regionserver/cn012:0-0] regionserver.HRegion(1711): Closed ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:12:16,887 DEBUG [RS_CLOSE_REGION-regionserver/cn012:0-0] handler.CloseRegionHandler(129): Closed ns2:test-15390222622491,,1539022272419.a5b65c0ba00fd6a2f67397f742450e8c.
2018-10-08 18:12:16,920 INFO [RS:0;cn012:37486] regionserver.HRegionServer(1098): stopping server cn012.l42scl.hortonworks.com,37486,1539022239614; all regions closed.
2018-10-08 18:12:16,926 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(874): complete file /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/WALs/cn012.l42scl.hortonworks.com,37486,1539022239614/cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta.1539022316852.meta not finished, retry = 0
2018-10-08 18:12:17,036 DEBUG [RS:0;cn012:37486] wal.AbstractFSWAL(858): Moved 2 WAL file(s) to /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/oldWALs
2018-10-08 18:12:17,037 INFO [RS:0;cn012:37486] wal.AbstractFSWAL(861): Closed WAL: AsyncFSWAL cn012.l42scl.hortonworks.com%2C37486%2C1539022239614.meta:.meta(num 1539022316852)
2018-10-08 18:12:17,050 DEBUG [RS:0;cn012:37486] wal.AbstractFSWAL(858): Moved 3 WAL file(s) to /user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/oldWALs
2018-10-08 18:12:17,050 INFO [RS:0;cn012:37486] wal.AbstractFSWAL(861): Closed WAL: AsyncFSWAL cn012.l42scl.hortonworks.com%2C37486%2C1539022239614:(num 1539022316880)
2018-10-08 18:12:17,051 DEBUG [RS:0;cn012:37486] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:12:17,051 INFO [RS:0;cn012:37486] regionserver.Leases(149): Closed leases
2018-10-08 18:12:17,053 INFO [RS:0;cn012:37486] hbase.ChoreService(327): Chore service for: regionserver/cn012:0 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2018-10-08 18:12:17,053 INFO [regionserver/cn012:0.logRoller] regionserver.LogRoller(222): LogRoller exiting.
2018-10-08 18:12:17,056 INFO [RS:0;cn012:37486] ipc.NettyRpcServer(144): Stopping server on /172.18.128.12:37486
2018-10-08 18:12:17,092 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/cn012.l42scl.hortonworks.com,37486,1539022239614
2018-10-08 18:12:17,092 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2018-10-08 18:12:17,092 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): regionserver:37486-0x16654dfacc40001, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2018-10-08 18:12:17,098 INFO [RS:0;cn012:37486] regionserver.HRegionServer(1154): Exiting; stopping=cn012.l42scl.hortonworks.com,37486,1539022239614; zookeeper connection closed.
2018-10-08 18:12:17,099 INFO [RegionServerTracker-0] master.RegionServerTracker(168): RegionServer ephemeral node deleted, processing expiration [cn012.l42scl.hortonworks.com,37486,1539022239614]
2018-10-08 18:12:17,099 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3a5f54f4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(222): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3a5f54f4
2018-10-08 18:12:17,099 INFO [RegionServerTracker-0] master.ServerManager(597): Cluster shutdown set; cn012.l42scl.hortonworks.com,37486,1539022239614 expired; onlineServers=0
2018-10-08 18:12:17,099 INFO [RegionServerTracker-0] regionserver.HRegionServer(2160): ***** STOPPING region server 'cn012.l42scl.hortonworks.com,42545,1539022237747' *****
2018-10-08 18:12:17,099 INFO [RegionServerTracker-0] regionserver.HRegionServer(2174): STOPPED: Cluster shutdown set; onlineServer=0
2018-10-08 18:12:17,101 INFO [Time-limited test] util.JVMClusterUtil(326): Shutdown of 1 master(s) and 1 regionserver(s) complete
2018-10-08 18:12:17,101 DEBUG [M:0;cn012:42545] ipc.AbstractRpcClient(200): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e0c9bcc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=cn012.l42scl.hortonworks.com/172.18.128.12:0
2018-10-08 18:12:17,101 INFO [M:0;cn012:42545] regionserver.HRegionServer(1032): Stopping infoServer
2018-10-08 18:12:17,107 INFO [M:0;cn012:42545] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@16f5fbb7{/,null,UNAVAILABLE}{jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/master}
2018-10-08 18:12:17,107 DEBUG [Time-limited test-EventThread] zookeeper.ZKWatcher(478): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master
2018-10-08 18:12:17,108 DEBUG [Time-limited test-EventThread] zookeeper.ZKUtil(357): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2018-10-08 18:12:17,109 INFO [M:0;cn012:42545] server.AbstractConnector(318): Stopped ServerConnector@abac572{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
2018-10-08 18:12:17,110 INFO [M:0;cn012:42545] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@5290e690{/static,jar:file:/home/hbase/.m2/repository/org/apache/hbase/hbase-server/3.0.0-SNAPSHOT/hbase-server-3.0.0-SNAPSHOT.jar!/hbase-webapps/static,UNAVAILABLE}
2018-10-08 18:12:17,111 INFO [M:0;cn012:42545] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@4a1802a8{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE}
2018-10-08 18:12:17,112 INFO [M:0;cn012:42545] regionserver.HRegionServer(1070): stopping server cn012.l42scl.hortonworks.com,42545,1539022237747
2018-10-08 18:12:17,112 DEBUG [M:0;cn012:42545] zookeeper.MetaTableLocator(642): Stopping MetaTableLocator
2018-10-08 18:12:17,114 INFO [M:0;cn012:42545] regionserver.HRegionServer(1098): stopping server cn012.l42scl.hortonworks.com,42545,1539022237747; all regions closed.
2018-10-08 18:12:17,114 DEBUG [M:0;cn012:42545] ipc.AbstractRpcClient(483): Stopping rpc client
2018-10-08 18:12:17,114 INFO [M:0;cn012:42545] master.HMaster(1433): Stopping master jetty server
2018-10-08 18:12:17,116 INFO [M:0;cn012:42545] server.AbstractConnector(318): Stopped ServerConnector@4143618a{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
2018-10-08 18:12:17,118 INFO [M:0;cn012:42545] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2018-10-08 18:12:17,118 INFO [M:0;cn012:42545] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
2018-10-08 18:12:17,119 INFO [M:0;cn012:42545] hbase.ChoreService(327): Chore service for: master/cn012:0 had [[ScheduledChore: Name: FlushedSequenceIdFlusher Period: 10800000 Unit: MILLISECONDS]] on shutdown
2018-10-08 18:12:17,121 DEBUG [M:0;cn012:42545] master.HMaster(1447): Stopping service threads
2018-10-08 18:12:17,122 DEBUG [M:0;cn012:42545] zookeeper.ZKUtil(614): master:42545-0x16654dfacc40000, quorum=localhost:54078, baseZNode=/1 Unable to get data of znode /1/master because node does not exist (not an error)
2018-10-08 18:12:17,122 WARN [M:0;cn012:42545] master.ActiveMasterManager(271): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2018-10-08 18:12:17,124 INFO [M:0;cn012:42545] master.ServerManager(1064): Writing .lastflushedseqids file at: hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/.lastflushedseqids
2018-10-08 18:12:17,190 INFO [master/cn012:0.splitLogManager..Chore.1] hbase.ScheduledChore(180): Chore: SplitLogManager Timeout Monitor was stopped
2018-10-08 18:12:17,550 INFO [M:0;cn012:42545] assignment.AssignmentManager(261): Stopping assignment manager
2018-10-08 18:12:17,553 INFO [M:0;cn012:42545] procedure2.RemoteProcedureDispatcher(116): Stopping procedure remote dispatcher
2018-10-08 18:12:17,554 INFO [M:0;cn012:42545] wal.WALProcedureStore(326): Stopping the WAL Procedure Store, isAbort=false
2018-10-08 18:12:17,923 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(157): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2018-10-08 18:12:17,923 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(157): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.backup.BackupObserver
2018-10-08 18:12:17,975 INFO [M:0;cn012:42545] hbase.ChoreService(327): Chore service for: master/cn012:0.splitLogManager. had [] on shutdown
2018-10-08 18:12:17,976 INFO [M:0;cn012:42545] flush.MasterFlushTableProcedureManager(81): stop: server shutting down.
2018-10-08 18:12:17,976 INFO [M:0;cn012:42545] master.LogRollMasterProcedureManager(74): stop: server shutting down.
2018-10-08 18:12:17,980 INFO [M:0;cn012:42545] ipc.NettyRpcServer(144): Stopping server on /172.18.128.12:42545
2018-10-08 18:12:18,032 DEBUG [M:0;cn012:42545] zookeeper.RecoverableZooKeeper(176): Node /1/rs/cn012.l42scl.hortonworks.com,42545,1539022237747 already deleted, retry=false
2018-10-08 18:12:18,055 INFO [M:0;cn012:42545] regionserver.HRegionServer(1154): Exiting; stopping=cn012.l42scl.hortonworks.com,42545,1539022237747; zookeeper connection closed.
2018-10-08 18:12:18,063 WARN [Time-limited test] datanode.DirectoryScanner(342): DirectoryScanner: shutdown has been called
2018-10-08 18:12:18,083 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@38b73a58{/,null,UNAVAILABLE}{/datanode}
2018-10-08 18:12:18,084 INFO [Time-limited test] server.AbstractConnector(318): Stopped ServerConnector@6cbd884e{HTTP/1.1,[http/1.1]}{localhost:0}
2018-10-08 18:12:18,084 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@525aa793{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-hdfs/3.1.1/hadoop-hdfs-3.1.1-tests.jar!/webapps/static,UNAVAILABLE}
2018-10-08 18:12:18,084 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@5a1c7b3e{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE}
2018-10-08 18:12:18,092 WARN [BP-827454334-172.18.128.12-1539022232083 heartbeating to localhost/127.0.0.1:41712] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2018-10-08 18:12:18,092 WARN [BP-827454334-172.18.128.12-1539022232083 heartbeating to localhost/127.0.0.1:41712] datanode.BPServiceActor(852): Ending block pool service for: Block pool BP-827454334-172.18.128.12-1539022232083 (Datanode Uuid debf5d4d-aa2b-4e98-a5e9-4756ba54407e) service to localhost/127.0.0.1:41712
2018-10-08 18:12:18,096 WARN [refreshUsed-/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data2/current/BP-827454334-172.18.128.12-1539022232083] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2018-10-08 18:12:18,096 WARN [refreshUsed-/mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/cluster_cd2e8f85-ae53-1ae6-35ad-0e9e05d5771f/dfs/data/data1/current/BP-827454334-172.18.128.12-1539022232083] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2018-10-08 18:12:18,149 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@6a118103{/,null,UNAVAILABLE}{/hdfs}
2018-10-08 18:12:18,152 INFO [Time-limited test] server.AbstractConnector(318): Stopped ServerConnector@b02cad7{HTTP/1.1,[http/1.1]}{localhost:0}
2018-10-08 18:12:18,152 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@6c904a55{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-hdfs/3.1.1/hadoop-hdfs-3.1.1-tests.jar!/webapps/static,UNAVAILABLE}
2018-10-08 18:12:18,153 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@61063b88{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE}
2018-10-08 18:12:18,168 ERROR [Time-limited test] server.ZooKeeperServer(472): ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2018-10-08 18:12:18,171 INFO [Time-limited test] zookeeper.MiniZooKeeperCluster(326): Shutdown MiniZK cluster with all ZK servers
2018-10-08 18:12:18,185 INFO [Time-limited test] hbase.HBaseTestingUtility(1235): Minicluster is down
2018-10-08 18:12:18,185 INFO [Time-limited test] hbase.HBaseTestingUtility(2747): Stopping mini mapreduce cluster...
2018-10-08 18:12:18,195 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@5fb77b78{/,null,UNAVAILABLE}{/node}
2018-10-08 18:12:18,196 INFO [Time-limited test] server.AbstractConnector(318): Stopped ServerConnector@4c2e90f3{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:0}
2018-10-08 18:12:18,196 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@4092dae5{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,UNAVAILABLE}
2018-10-08 18:12:18,197 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@7d0f0186{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE}
2018-10-08 18:12:18,227 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@19fd8a37{/,null,UNAVAILABLE}{/node}
2018-10-08 18:12:18,229 INFO [Time-limited test] server.AbstractConnector(318): Stopped ServerConnector@54512ed9{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:0}
2018-10-08 18:12:18,229 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@82dd238{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,UNAVAILABLE}
2018-10-08 18:12:18,230 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@55391eaf{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE}
2018-10-08 18:12:18,245 ERROR [Thread[Thread-296,5,FailOnTimeoutGroup]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(696): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2018-10-08 18:12:18,245 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@315aebbd{/,null,UNAVAILABLE}{/cluster}
2018-10-08 18:12:18,246 INFO [Time-limited test] server.AbstractConnector(318): Stopped ServerConnector@394100dd{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:0}
2018-10-08 18:12:18,247 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@541054aa{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,UNAVAILABLE}
2018-10-08 18:12:18,247 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@2ca64751{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE}
2018-10-08 18:12:18,251 WARN [ApplicationMaster Launcher] amlauncher.ApplicationMasterLauncher$LauncherThread(122): org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2018-10-08 18:12:18,255 ERROR [SchedulerEventDispatcher:Event Processor] event.EventDispatcher$EventProcessor(61): Returning, interrupted : java.lang.InterruptedException 2018-10-08 18:12:18,259 ERROR [Thread[Thread-304,5,FailOnTimeoutGroup]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(696): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2018-10-08 18:12:18,266 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.w.WebAppContext@be4b506{/,null,UNAVAILABLE}{/jobhistory} 2018-10-08 18:12:18,267 INFO [Time-limited test] server.AbstractConnector(318): Stopped ServerConnector@4d733c6a{HTTP/1.1,[http/1.1]}{cn012.l42scl.hortonworks.com:0} 2018-10-08 18:12:18,268 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@7941e491{/static,jar:file:/home/hbase/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.1.1/hadoop-yarn-common-3.1.1.jar!/webapps/static,UNAVAILABLE} 2018-10-08 18:12:18,268 DEBUG [master/cn012:0:becomeActiveMaster-EventThread] zookeeper.ZKWatcher(478): replicationLogCleaner-0x16654dfacc40004, quorum=localhost:54078, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2018-10-08 18:12:18,269 DEBUG [master/cn012:0:becomeActiveMaster-EventThread] zookeeper.ZKWatcher(548): replicationLogCleaner-0x16654dfacc40004, quorum=localhost:54078, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring 2018-10-08 18:12:18,269 INFO [Time-limited test] handler.ContextHandler(910): Stopped o.e.j.s.ServletContextHandler@6b11374e{/logs,file:///mnt/disk2/a/hbase/hbase-backup/target/test-data/6d37cdd6-5250-a21f-be99-831820dee9db/hadoop_logs/,UNAVAILABLE} 2018-10-08 18:12:18,271 ERROR [Thread[Thread-263,5,FailOnTimeoutGroup]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(696): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2018-10-08 18:12:18,271 INFO [Time-limited test] 
hbase.HBaseTestingUtility(2750): Mini mapreduce cluster stopped